Yearly Archives: 2013
Dec 13, 2013 Michael Madison
A debate continues to brew about the proper interpretation of the Computer Fraud and Abuse Act (CFAA), the federal statute that imposes criminal penalties on individuals who access computer networks without authorization. For at least a decade, scholars and a growing number of courts have wondered whether the owner of a computer network could define “authorization” using form “terms and conditions” of the sort often presented to consumers who purchase or use digital services. If that strategy were successful, then someone who clicked “I Agree” on a digital form yet failed to comply with all of its terms might be accused – even convicted – of the federal crime specified by the CFAA.
Andrea Matwyshyn uses that apparently technical problem to revisit a much larger question: Whether, when, and how should the law treat computers and computer networks as special when dealing with a host of doctrinal and policy issues: commercial law, intellectual property law, telecommunications law, antitrust law, criminal law, and so on? This was the subject of a famous scholarly debate back at the turn of the 21st century between Lawrence Lessig, who argued that considering a “law of cyberspace” offered commentators access to potentially valuable insights about how people interact with each other, and Judge Frank Easterbrook, who accused cyberspace promoters of constructing an unworkable and unhelpful “law of the horse.” No one “won” the debate in its original form, but in the late 1990s the question was mostly academic, literally. Too few law and policy judgments turned on the answer to make the debate matter in any but a conceptual or theoretical sense.
Matwyshyn’s “The Law of the Zebra” suggests that the answer does matter in a concrete set of cases, and she has the case reports to show it. Her answer is that both Lessig and Easterbrook were right: There is something special about computers and computer networks. But what’s special about them is that judges should not be seduced into treating them as something new and strange. Most of the time, the common law deals with them, or should deal with them, just fine. Courts that fail to remember that fact are dealing in a “law of the zebra,” an unusual creature, rather than a law of the horse (of course), an ordinary and more common animal.
In one sense, then, the article resumes a dialogue about metaphorical treatments of the Internet that captured the imaginations of a host of legal scholars a decade or so ago, including me. Is cyberspace a thing? A place? A frontier? A horse? A zebra?
That question has no single answer, and Matwyshyn is smart enough not to propose one. Instead, she wants to show how the alleged specialness of computer networks leads courts astray. The CFAA and breaches of relevant contracts form the doctrinal backbone of an inquiry into techno-exceptionalism.
She shows how courts have dealt with contract formation and breach of contract questions in computer access contexts in inconsistent ways, and how that inconsistency has affected application of the CFAA. She identifies her normative baseline – a series of related principles or propositions that define a common law contract framework – and argues that in contract formation questions, a degree of techno-exceptionalism is warranted; in contract interpretation and enforcement contexts, “regular” contract law will do. Using four paradigmatic examples of “types” of computer hackers who might breach agreements with network providers – the sorts of people the CFAA was arguably drafted to deal with – she shows how her balanced form of “restrained technology exceptionalism” treats CFAA/contract law intersections. Ordinary contract remedies are sufficient to deal with the harms that result from most types of unauthorized network access linked to bypassing agreed-to terms and conditions. She argues that adding criminal liability under the CFAA to those remedies amounts to a sort of “weaponized” breach of contract that is warping basic contract law as applied in computer contexts, is bad policy, and arguably conflicts with Constitutional law prohibiting peonage. The proper way to look at the CFAA/contract interface, she argues, is through the prism of private ordering, a framework that is consistent both with Lessig’s view of cyberspace law (in which computer networks present novel forms of private ordering for fresh normative evaluation) and Easterbrook’s (in which existing doctrinal categories were more than adequate to that normative task).
On the doctrinal question, is she right? Possibly. But the doctrinal means are less important here than the policy ends. In effect, Matwyshyn argues that contract remedies should preempt CFAA liability where the two overlap. That sort of “reverse federalism” (“reverse” because, of course, we rarely think of state law preempting federal law) is, in a perverse way, quite consistent with a heterogeneous, anti-one-size-fits-all view of the Internet. Matwyshyn is not making an appeal to an idealized “information wants to be free” fantasy. Instead, she points out that the real policy at stake in interpretations of the CFAA, and in metaphorical debates about horses and zebras, is information security. Linking criminal liability under the CFAA to breaches of the standardized, form-based terms and conditions that are essentially ubiquitous on the Internet trivializes the idea of access and undermines incentives for network providers to care properly for information that they truly care about.
Nov 12, 2013 James Grimmelmann
Technologies do not come with social or legal instruction manuals. There is nothing inherent in rooftop strobe light bars to suggest that police may use them but not civilians, or in thermal imaging cameras to suggest the reverse. The public must figure out what to do with each technology as it becomes available: embrace, ignore, regulate, ban. If we are lucky, the rules distinguishing acceptable from forbidden uses can come, over time, to seem like natural features of the technology itself. But they are not: the rules have to come from somewhere, and someone had to work them out, somehow.
For an example, consider today’s debates on what to do about drones. Or for another, consider spam, the subject of Finn Brunton’s erudite and entertaining Spam: A Shadow History of the Internet. Brunton pushes his history far back before the 1994 advertisement from a pair of immigration lawyers that is usually thought of as spam’s Ground Zero. He notes, for example, a 1971 antiwar message sent to every user of the Compatible Time-Sharing System and a 1978 announcement of a DEC computer demonstration sent to all West Coast ARPANET users–both of which provoked debate around the acceptable boundaries of network use. Brunton argues that well into the 1990s, spamming was considered a primarily social offense, separate and distinct from commercial self-promotion, and of an entirely lesser order than “net abuse” (P. 39) like crashing computers. Spam was a form of free speech, and like other inappropriate speech was to be met with censure rather than censorship.
But this attitude changed, and changed sharply, as the first wave of commercial spammers arrived en masse. Unlike the earlier “spammers,” who could be telephoned and reasoned with, or shamed into silence, or simply identified and ignored by users’ personal message filters, these new operators both flaunted their identity as outsiders to close-knit online communities and aggressively covered their tracks to keep their messages getting through. In the face of these new actors, Brunton shows that spam was effectively redefined as a legal and technical problem rather than a social one. To many antispam activists, the great danger of CAN-SPAM was that it would legitimize spam. But the combination of a legislative framework with reasonably effective filtering had another effect entirely–it “destroyed email spam as a reputable business model,” (P. 143) and “eliminated the mere profit-seeking carpetbaggers and left the business to the criminals.” (P. 144)
Spam is thoughtful about the ontology of its namesake. We are accustomed to thinking of spam as an email phenomenon. But, as Brunton effectively demonstrates, email spam is only one instance of a much larger pattern. Today there are Facebook spam, LinkedIn spam, blog comment spam, Twitter spam–and many more. Indeed, spam’s contested definitions create any number of difficult boundary cases. Gmail’s inbox tabs shunt “Promotions” into a separate folder, even when the recipients have affirmatively opted into receiving these emails. Or, to take one of Brunton’s examples, Demand Media “commissions content from human writers (who are willing to meet very low standards for very little money) on the basis of an algorithm that determines ad revenue over the lifetime of any given article.” (P. 162)
Brunton’s own definition of spam, offered at the end of the picaresque tour, is “the use of information technology infrastructure to exploit existing aggregations of human attention.” (P. 199) Both halves are exactly on point. Spam is medium- and technology-agnostic, but it is inherently a technological phenomenon: without the amplifying power of commodity copying, spam’s characteristic bulk is impossible. And spam is essentially a problem of attention hijacking: the systematic conscription of large and diffuse audiences by abusive speakers.
Much of Brunton’s story of spam is told through the eyes of its enemies, from the vigilantes who tried to burn out commercial spammers’ fax machines to the modern programmers who build increasingly complex filters to identify and delete spam. Significantly, this is history through the eyes of its losers: the story of the tide as related by King Canute. Brunton conveys effectively the sheer frustration felt by anti-spam activists. The network they loved was being abused by outsiders who pointedly rejected their values, but they found themselves unable to stop the abuse. One countermeasure after another fell before the onslaught: killfiles, cancelbots, keyword filters, blackhole lists, and so many others.
Roughly the second half of the book is devoted to the remarkable technical evolution of computer-generated spam. Brunton traces the rise of keyword stuffing, hidden text, Oulipo-esque email generators, spam blogs, content farms, Mechanical Turk-fueled social spam, CAPTCHA crackers, Craigslist bots, malware as a source of spam, and online mercenaries renting out botnets to the highest bidder. This escalation–from a pair of immigration lawyers in over their heads to a “criminal infrastructure” industry (P. 195) in less than two decades–is nothing short of alarming.
Spam is also one of the most nuanced books to unpack what makes the postmodern post-Web 2.0 Internet tick. Borrowing Matt Jones’s concept of “robot-readable” media–“objects meant primarily for the attention of other objects” (Pp. 110-11)–Brunton gives an insightful metaphor of the uneasy coexistence of human and software readers online:
Consider a flower–say, a common marsh marigold, Caltha palustris. A human sees a delightful bloom, a solid and shiny yellow … A bee, meanwhile, sees something very different: the yellow is merely the edging around a deep splash of violet invisible to human eyes–a color out on the ultraviolet end of the spectrum known as “bee violet.” It’s a target meant for the creature that can fly into the flower and gather pollen. The marsh marigold exists in two worlds at once. (P. 110)
The visible language of QR codes and the invisible language of HTML tags are not meant for human consumption. They are there for our computers, not for us. But when we rely on those computers to find interesting things and show us the results, we leave ourselves open to a new kind of vulnerability:
If their points of weakness can be found, it is quite possible to trick our robots, like distracting a bloodhound with a scrap of meat or a squirt of anise–giving it the kind of thing it really wants to find, or the kind of thing that ruins its process of searching. The robot can be tricked, and the human reached: this is the essence of search engine spamming. (P. 113)
Brunton describes the current state of affairs, in which spammers and spam filters are locked in an arms race to master human linguistic patterns, as a parody of the Turing Test, “in which one set of algorithms is constantly trying to convince the other of their acceptable degree of salience–of being of interest and value to the humans.” (P. 150) And in the book’s conclusion, he circles back to spam’s central irony:
Indeed, from a certain perverse perspective … spam can be presented as the Internet’s infrastructure used maximally and most efficiently, for a certain value of “use.” … Spammers will fill every available channel to capacity, use every exploitable resource: all the squandered central processing unit cycles as a computer sits on a desk while its owner is at lunch, or toiling over some Word document, can now be put to use sending polymorphic spam messages–hundreds a minute, and each one unique. So many neglected blogs and wikis and other social spaces: automatic bot-posted spam comments, one after another, will fill the limits of their server space, like barnacles and zebra mussels growing on an abandoned ship until their weight sinks it. (P. 200)
Spam, in other words, is the cancer of the Internet. It is not an alien organism bent only on invasion and destruction. Rather, it takes ordinary healthy communications and extrapolates them until they become grotesque, obscene, deadly parodies of themselves. Spam is constantly mutating, and it cannot be extirpated, not without killing the Internet, because the mechanisms spam and healthy communication rely on to live are one and the same. The email is coming from inside the house.
Sep 10, 2013 Rebecca Tushnet
Kristelia Garcia, Private Copyright Reform, 20 Mich. Telecomm. & Tech. L. Rev. (forthcoming 2013), available at SSRN.
Differential regulation of different technologies is baked into many of our laws, often on the basis of outdated assumptions, relative political power at the time of enactment, or other quirks. Internet exceptionalism is common, but perhaps nowhere more galling than as applied to music in the US, where the interests of terrestrial radio and music copyright owners combined to produce a regime so tangled that to call it ‘Byzantine’ is an insult to that empire.
Kristelia Garcia dives deep into the details of digital music law, focusing on two case studies that she finds promising and troubling by turns. While opting out of the statutory scheme may well be locally efficient and risk-minimizing for the participants, some of the gains come from cutting artists out of the benefits. Other third parties may also be negatively affected if statutorily set copyright royalty rates are influenced by these private deals without full recognition of their specific circumstances, or if adverse selection leaves the collective rights organizations (CROs) that administer music rights too weak to protect the interests of the average performer or songwriter. Garcia’s paper suggests both that scholars must keep an eye on business developments that can make the law on the books obsolete and that specific legal changes are needed to protect musicians, songwriters, and internet broadcasters as part of the dizzying pace of change in digital markets.
Garcia reviews the complex structure of music law, which defies summary. Of particular interest, § 114 of the Copyright Act provides for statutory licensing of the digital performance right in sound recordings in some circumstances, with rights administered by an entity known as SoundExchange. Despite serving an ever-converging audience indifferent to how its music is delivered, radio providers pay stunningly varying amounts. The Internet radio provider Pandora pays more than 50% of its revenue in performance royalties, while satellite and cable providers pay roughly 1/6th the rate Pandora pays, and terrestrial radio broadcasters pay nothing at all. But §114 also allows copyright owners to negotiate with digital performing entities and agree on royalty rates and license terms that replace the statutory licenses.
Likewise, §115 allows voluntary negotiations for royalties that replace the statutory license for musical works. Historically, performance rights organizations, primarily ASCAP and BMI, have administered the performance rights in musical works. These private organizations are not creatures of statute like SoundExchange, but they operate under antitrust decrees, with “rate courts” that review their licensing fees.
In June 2012, Taylor Swift’s record label, Big Machine, cut a separate performance rights deal with radio giant Clear Channel, circumventing SoundExchange. Surprisingly, Clear Channel agreed to pay performance royalties even for “spins” on terrestrial radio, for which there is no sound recording performance right. In return, Clear Channel received a lower rate for digital performances, and possibly other special considerations such as access to unique content. It received certainty in the form of a rate calculated as a share of revenue, rather than the less predictable per-play rate that would be owed SoundExchange under the statutory license. Big Machine may take a short-term loss, but may also receive preferential treatment for its artists. Sharing revenue means that Clear Channel no longer has an incentive to limit the number of Big Machine songs it plays, as it does under the statutory per-play royalty—and that means Big Machine is likely to get more airtime to promote its artists, which may result in increased cross-promotional opportunities.
Presumably, the parties wouldn’t have bargained around the law unless the contract benefited them both. But that’s not the end of the story. Under the compulsory license, Clear Channel would pay SoundExchange a set fee per digital spin of We Are Never Ever Getting Back Together; SoundExchange would distribute most of that directly to featured and non-featured performing artists (the latter include backup singers and session musicians, who have far less negotiating power than stars) and to the copyright holder, in proportions required by the statute. By opting out of §114, Clear Channel and Big Machine avoided the need to pay artists. (As Garcia notes, in theory artists could eventually claw back a share of such deals from record labels by contract, but the point of §114’s required distributions was Congress’s determination that artists, especially non-featured performers, lacked sufficient bargaining power to secure fair compensation in the market.) They also avoided the need to pay a share of SoundExchange’s overhead. These savings are, from others’ perspectives, externalities that the parties imposed on nonparticipants.
The deal is also an attempt to ease the grossly unjustified disparity between digital and terrestrial broadcasters, but only for Clear Channel, and the fact that legal discrimination against other Internet broadcasters remains in place creates further complications. If Clear Channel can pay less than the near-crippling performance royalties other digital broadcasters must pay, it obtains a marked competitive advantage. The private deal additionally allows it to predict its costs with much greater certainty—through both congressional and administrative interventions, statutory rates have changed by orders of magnitude, and they have to be re-set every few years. Separately, if the long-delayed but much-desired general public performance right for sound recordings is ever enacted, requiring terrestrial broadcasters to pay royalties to record labels, Clear Channel could well end up with an advantage over its terrestrial competitors. Even if that gamble doesn’t pay off, if the future really is in digital radio, the lower digital rate may justify paying general performance royalties.
Garcia’s second example comes from the musical work side. Sony/ATV, a music publisher, accepted a lower performance royalty rate from DMX, a digital music service that provides music programming for retail stores and restaurants, in exchange for a large advance. Again, this deal enabled the parties to avoid paying artists, who evidently lack the power to force contractual change.
Moreover, songwriters—whether contracted to Sony/ATV or not—lost out because of the deal. DMX used the negotiated royalty rate (excluding the advance) to convince the rate court to lower rates across the board, reflecting the “market value” supposedly expressed in the Sony/ATV deal. A similar dynamic is possible with the Clear Channel/Big Machine agreement, since the market value of the digital performance right looks lower, despite the terrestrial performance royalties being paid as part of the overall package.
At the same time, Sony/ATV ultimately withdrew all its digital content from ASCAP, and that withdrawal lowered the value of an ASCAP license by shrinking the size of its repertoire and by increasing transaction costs for future licensees. This is going well for Sony/ATV—in early 2013, it signed a direct license agreement with Pandora that increased its royalties by 25% over the ASCAP rate—but that’s an unsurprising consequence of adverse selection. When the strongest participants with the most valuable catalogs opt out in order to cut their own deals, the average quality/value of the remaining catalogs goes down, making the CRO less desirable. And this is true even if, as a group and on average, the participants would maximize their return by banding together.
These private deals might seem to be more efficient than statutory licenses or CROs (which operate in the shadow of government regulation, including antitrust). But they are fundamentally shaped by pervasive government regulation of the music business for the past century. They are no more proof of the superiority of the “free market” than are commercially successful technologies first developed through government R&D funding. Indeed, to the extent that the rest of the market operates under pervasive legal constraints, these exceptional deals may be even more distortionary.
Still, Garcia argues, the benefits of certainty shouldn’t be discounted, especially those available through a revenue-based rather than per-play model. (Statutory rates are also capable of using revenue-based metrics, of course, and this has often been a part of proposals to fix the statutory digital performance royalty.) However, she points out that the interests of institutional stability are unlikely to coincide with the interests of artists.
Garcia uses her examples to examine arguments made by Robert Merges and Mark Lemley about the role of property rules and liability rules in encouraging the formation of private groups that engage in blanket licensing, as the CROs do: Merges argues for property rules as incentivizing the most efficient private arrangements, while Lemley contends that owners and users can and do also contract around liability rules where that’s more efficient. The existence of evasion of both compulsory licenses and private entities such as ASCAP, Garcia argues, complicates the analysis. Among other things, she suggests, the existing structures operate as starting points allowing the parties to frame defection in mutually agreeable terms, and provide a backup solution if negotiations fail. “Neither party has to commit to the terms vis a vis all partners, nor does either party have to engage in costly, multiple negotiations, since all extradeal licenses can continue under the compulsory regime.” (P. 40.)
This portion of the paper could have used more discussion of adverse selection, as well as the related issue that the music business includes a few massive, oligopolistic entities with significant market power. Sony/ATV already represents a large bloc of artists, or at least of works. In such a concentrated market, the CRO may seem like just another layer of bureaucracy. But smaller or new entrants may join a CRO like ASCAP on nondiscriminatory terms, an option Sony/ATV generally does not offer. Instead of disintermediation and diversity, the current market structure for digital music seems to favor greater concentration—a problem explored by Tim Wu in his recent book.
Legal innovation often occurs without formal legal change. The easiest kind to see comes when enterprising lawyers discover a formerly neglected cause of action and set off a wave of lawsuits in the same area. Here, the innovation is different—market participants didn’t bother to opt out of the statutory schemes for decades, but the visibility of Clear Channel’s and Sony/ATV’s decisions means that others are likely to at least consider similar moves. In a time of diminished profits and overshadowing of traditional business models, the opportunity to lock in royalty rates regardless of changed laws, rate court rulings, and the like may prove tempting.
Garcia offers two tweaks to existing law in the absence of comprehensive reform (since even its proponents agree that it will take a very long time). First, she argues, parties who circumvent compulsory licenses should be required to follow the statutorily mandated distributions to artists, so that negotiating parties can’t benefit simply by virtue of writing them out of the deal. Second, all aspects of any private agreements used as evidence to set statutory rates must be fully disclosed, to avoid misrepresenting a deal with extra components as a market rate. These are modest proposals, and quite convincing, though the second in particular is likely to draw objections around protecting trade secrets and business models.
A larger lesson is that arbitrage opportunities, such as those produced by the differing treatment of terrestrial and digital performances, won’t be ignored forever, especially when sophisticated businesses are involved. The Internet offers many opportunities to experiment with business models. But without a sensible legal structure, the experimentation may only be in aid of externalizing costs on other parties and suppressing competition.
May 20, 2013 Paul Ohm
Lauren Willis, When Nudges Fail: Slippery Defaults, 80 U. Chi. L. Rev. ___ (forthcoming 2013), available at SSRN.
If Jotwell is meant to surface obscure gems of legal scholarship, which might go unnoticed otherwise, I might be missing the point by highlighting a work forthcoming in the not-so-obscure University of Chicago Law Review on the au courant topics of nudges and libertarian paternalism. But Lauren Willis’s new article, When Nudges Fail: Slippery Defaults, might escape the attention and acclaim it deserves as a work of information privacy law, so it is in that field I hope to give the article its due.
Willis’s article takes on the pervasive idea that all default choices are sticky. Defaults can sometimes be sticky, but Willis carefully deconstructs the economic, social, and technological preconditions that tend toward stickiness, and then demonstrates how firms can manipulate those conditions to render defaults quite slippery.
This article deserves to become a standard citation in information privacy law scholarship, important in at least three ways. Most obviously, the article uses online behavioral advertising and the Do Not Track debate as a recurring example, revisiting it throughout. This article makes a very useful contribution to the Do Not Track debate, which continues to rage.
Deeper, and more generally, the article delivers a blow—perhaps fatal—to the age-old “opt-in versus opt-out” debate. Should new, privacy invasive practices affect only people who opt-in to them, or should they instead apply to all except those who opt-out of them? Willis helps us understand that this debate, which has generated so much energy and discussion, may matter less than we think. “When firms have significant control over the process for opting out or the context in which the defaults are presented, firms can undermine the stickiness of policy defaults.” In other words, firms can, and do, encourage, cajole, push, and deceive customers into opting in when they rationally should not.
As proof, the heart of the article presents a lengthy examination of failed attempts by regulators to limit what they have seen as predatory bank practices surrounding checking account overdraft coverage. Although it might seem like an unequivocal convenience to have banks cover rather than reject ATM withdrawals and debit card payments from accounts with insufficient funds, because of the fees they charge for this “service”—$20 or even more—these amount to “low risk, high cost loan[s].” Low risk, because the bank is paid back automatically with the next deposit, and high cost—Willis gives a typical example amounting to an effective 7,000% APR. In some cases, these banks offer alternative services that provide basically the same protection at orders of magnitude less cost. For one who worries about consumer welfare, this is a maddening story, told with detail and care. Banks lied about the benefits of the coverage. They deluged the holdouts with a flood of paper and harassed them on the telephone. And in the end, they spurred droves of customers—according to one study, 75% of all customers and 98% of customers who overdraft more than ten times per year—to switch.
Going forward, one will be able to skim the first few footnotes of any article that uses the words “privacy,” “opt-in,” and “opt-out” to apply the “Willis Test.” Any such article that doesn’t cite this piece probably needn’t be taken seriously.
But at its deepest, and to my mind most interesting, level this article chips away at the faith we have placed in notice and choice, which is to say, at the foundation of most contemporary information privacy laws. Notice implies the transmission of accurate and fair information giving rise to fully informed consumers, and choice presupposes freedom of action and the absence of coercion. Watching the banks manipulate their customers into making bad choices brings home the challenges that face those who yearn for honest notice and choice. The lesson Willis offers repeatedly is crucial for information privacy law: companies control the messages that consumers see, and they are masters at manipulation.
And these are banks. Banks! Dinosaurs of the old economy that build websites that users merely suffer to use rather than enjoy and whose executives probably think UI and UX are stock ticker symbols. To flip the overdraft protection default, these fusty old companies resorted to costly Jurassic techniques involving the phone, ATM, and email account. Consider how much more consumers are outmatched by the media owners who run today’s engaging-to-the-point-of-addictive mobile apps and social networking sites. Online, consumers notice only what these master manipulators want them to notice and choose what they are preordained to choose.
This matters for information privacy a lot. We still rely on notice and choice as the most important tools regulators have to guarantee user privacy. Proposals for tackling new privacy concerns—from location tracking, to remote biometrics, to genomic information, and beyond—continue to center on creating the conditions for meaningful notice and consent. Willis’s article suggests that firms will provide clear notice and obtain meaningful choice only when they see no reason to oppose either choice, which is to say, when it doesn’t count for much. It might be time for regulators to reach for different tools.
We have known for some time that notice and choice are plagued by information quality problems. But Willis’s article demonstrates the still unmet need for scholarship that deconstructs the mechanics of how companies manipulate these problems to their benefit to subvert individual privacy. This builds on the groundbreaking work of scholars from outside law such as Lorrie Cranor and Alessandro Acquisti. And while legal scholars like Ryan Calo have built on this work (and I’ve started to do so, too), we need more of this. We need thorough and careful accounts of the landscape of notice and choice. With this article, this very necessary research agenda now has a fine blueprint.
Mar 18, 2013 Herbert Burkert
At a conference hosted jointly by Peking University Law School and the Carter Center, former U.S. President Jimmy Carter, as recently reported by freedominfo.org (itself a highly recommendable source on access to government information), encouraged the Chinese government “to take critical steps toward institutionalizing the right to information, including reviewing the experiences to date under the current Open Government Information regulation and developing it into a more powerful legal regime with the statutory strength of a law.”
What these “Regulations of the People’s Republic of China on Open Government Information” (adopted April 5, 2007, effective May 1, 2008) are about, how and why they came into existence, and what is keeping them alive is described in Weibing Xiao’s book. According to Xiao, a Professor of Law at Shanghai University of Political Science and Law, it was not the fight against corruption that drove this development; rather, administrative problems with managing secrecy led to the first tentative research and policy initiatives for greater transparency. These initial steps were then encouraged by an improved information flow environment in which, in part due to technological developments, information exchanges increased between administrations and between citizens and administrations. Xiao’s account suggests a push model of government information, one which, though encouraged at all levels of government, seems particularly vital at the local level, where it is supported by long-standing and far-reaching administrative reforms.
Beyond this historical-analytical account, I recommend the book for four reasons:
First, the book provides a highly readable account of how sensitive legal-political subjects – sensitive because they are perceived as overly dependent on Western concepts of the democratic rule-of-law state – find their way into Chinese research agendas, how they are challenged, and how they eventually legitimize themselves. The reader interested in public law will note – perhaps with some surprise – the importance of the Chinese Constitution and of administrative law reform in this context.
Secondly, while it is obvious that a lot remains to be done in China, the account also serves as a mirror for the historical dependencies and shortcomings of those countries that might see themselves as champions of access to government information. Following the arguments and counter-arguments in China, readers will recall similar debates in their own countries, including references to cherished historical practices of secrecy. Xiao’s quotation from Lao Zi (老子) in this context, “People are difficult to govern when there is too much knowledge” (民之难治, 以其智多) (p. 29), may still reflect the thoughts of many government officials here when they are faced with transparency requests.
The third reason for recommending the book goes well beyond its subject proper: in his analysis of the Chinese situation, the author makes constant reference to information flows in society, their structures, their impact on social and political developments, and the importance of how, and with what objectives, regulation should address them. This is information law properly freed from the appearances of technology, and it is why Xiao’s book belongs in the Cyberlaw section: we still have to make substantial efforts to discover what is accidental about technology and what is the essence of information and its flow. Reading Xiao’s account of the Chinese developments, we get a critical assessment of some current approaches. While not yet providing a detailed methodology himself, he encourages us to look more closely at what law does to information flows.
Let me add a fourth argument, a puzzling one perhaps, based even on a cultural misunderstanding, but intriguing for someone like me for whom English – as for Professor Xiao – is not the first language. Writing in a second language strips away the ability you may have in your first to hide behind the flash work of oratory: you are forced to state your case simply and to argue it closely, point by point. When that is done as elegantly, economically, and intellectually pleasingly as Xiao does it, the result makes refreshing reading. This, by the way, is why I prefer to read German philosophers in English translation.