The Journal of Things We Like (Lots)

Democracy Unchained

K. Sabeel Rahman, Private Power, Public Values: Regulating Social Infrastructure in a Changing Economy, 39 Cardozo L. Rev. 5 (forthcoming 2017), available at SSRN.

In the mid-2000s, digital activists spearheaded the net neutrality movement to ensure fair treatment of the customers of Internet Service Providers (ISPs), as well as to protect the companies trying to reach them. Net neutrality rules limit or ban preferential treatment; for example, they might prevent an ISP like Comcast from offering exclusive access to Facebook and its partner sites on a “Free Basics” plan. Such rules have a sad and tortuous history in the US: rebuffed under Bush, long delayed and finally adopted by Obama’s FCC, and now in mortal peril thanks to Donald Trump’s elevation of Ajit Pai to be chairman of the Commission. But net neutrality as a popular principle has had more success, animating mass protests and even comedy shows. It has also given long-suffering cable customers a way of politicizing their personal struggles with haughty monopolies.

But net neutrality activists missed two key opportunities. They often failed to explain how far the neutrality principle should extend, as digital behemoths like Google, Facebook, Apple, Microsoft, and Amazon wielded extraordinary power over key nodes of the net. Some commentators derided calls for “search neutrality” or “app store neutrality”; others saw such measures as logical next steps for a digital New Deal. Moreover, activists did not adequately address key economic arguments. Neoliberal commentators insisted that the US would only see rapid advances in speed and quality of service if ISPs could recoup investment by better monetizing traffic. Progressives argued that “something is better than nothing”; a program like “Free Basics” probably benefits the disadvantaged more than no access at all.

In Private Power, Public Values: Regulating Social Infrastructure in a Changing Economy, K. Sabeel Rahman offers a theoretical framework to address these concerns. He offers a “definition of infrastructural goods and services” and a “toolkit of public utility-inspired regulatory strategies” that together provide a way to “diagnose and respond to new forms of private power in a changing economy,” including powerful internet platforms. He also gives a clear sense of why the public interest in regulating large internet firms should trump investors’ arguments for untrammeled rights to profits—and demands “public options” for those unable to afford access to privately controlled infrastructure.

Law’s treatment of infrastructure has been primarily economic in orientation. For example, Brett Frischmann’s magnum opus, Infrastructure: The Social Value of Shared Resources, offered a sophisticated theory of the spillover benefits of transportation, communication, environmental, and other forms of infrastructure, building on economists’ analyses of topics like externalities and congestion costs. Rahman complements this work by highlighting the political and moral dimensions of infrastructure. The early 20th century Progressive movement did not seek to regulate utilities simply because large firms might be inefficient. Progressives also worried directly about the power exercised by such firms: their ability to influence politicians, take an outsized share of GDP, and sandbag both rival firms and political opponents. As Rahman explains, “Industries triggered public utility regulation when there was a combination of economies of scale limiting ordinary accountability through market competition, and a moral or social importance that made the industries too vital to be left to the whims of the market or the control of a handful of private actors.”

Identifying the list of “foundational goods and services” meriting direct utility regulation is inevitably a mix of politics, science, and law. Determining, for example, whether broadband internet should be treated in a manner similar to telephone service depends on scientific analysis (e.g., might it soon become easier to provide internet over electric lines to complement existing cable?), political mandates (e.g., voters electing Republicans at this point may be assumed not to prioritize broadband regulation, as party lines on the issue are relatively clear), and legal judgments (e.g., is broadband so similar to wireline service that it would defeat the purpose of the relevant statutes to treat it far differently?). This delicate balance of the “three cultures” of science, democracy, and law means that the scope of utilities regulation will always be somewhat in flux. While the federal government is, today, chipping away at the category, future administrations may revive and expand it. If so, they will benefit from Rahman’s rigorous definition of infrastructure as “those goods and services which (i) have scale effects in their production or provision suggesting the need for some degree of market or firm concentration; (ii) unlock and enable a wide variety of downstream economic and social activities for those with access to the good or service; and (iii) place users in a position of potential subordination, exploitation, or vulnerability if their access to these goods or services is curtailed.”

Not just the scope but also the content of public utility regulation has evolved over time. As Rahman relates, three broad categories of regulation can provide a “21st century framework for public utility regulation”:

1) [F]irewalling core necessities away from behaviors and practices that might contaminate the basic provision of these goods and services—including through structural limits on the corporate organization and form of firms that provide infrastructural goods;

2) [I]mposing public obligations on infrastructural firms, whether negative obligations to prevent discrimination or unfair disparities in prices, or positive obligations to pro-actively provide equal, affordable, and accessible services to under-served constituencies; and

3) [C]reating public options, state-chartered, cheaper, basic versions of these services that would offer an alternative to exploitative private control in markets otherwise immune to competitive pressures.

These three approaches (“firewalls,” “public obligations,” and “public options”) have all helped increase the accountability of private powers in the past (as the work of Robert Lee Hale, praised as an inspiration in Rahman’s article, has shown). Cable firms cannot charge you a higher rate because they dislike your politics. Nor can they squeeze businesses that they want to purchase, charging higher and higher rates to an acquisition target until it relents. Nor should regulators look kindly on holding companies that would more ruthlessly financialize essential services (or the horizontal shareholding that functions similarly to such holding companies).

Many legal scholars working in fields like communications law, banking law, and cyberlaw identify the limits of dominant regulatory approaches, but they are researching in isolation. Rahman’s article provides a unifying framework for them to learn from one another, and it should catalyze important interdisciplinary work. For example, it is well past time for those writing about search engines to explore how principles of net neutrality could translate into robust principles of search neutrality. The European Commission has documented Google’s abuse of its dominant position in shopping services. Subsequent remedial actions should provide many opportunities for the imposition of public obligations (such as commitments to display at least some non-Google-owned properties prominently in contested search engine results pages) and firewalling (which might involve stricter merger review when a megafirm makes yet another acquisition).

Rahman also shows a critical complementarity between competition law and public utility regulation. Antitrust concepts can help policymakers assess when a field has become concentrated enough to merit regulatory attention. Both judgments and settlements arising out of particular cases could inform the work of, say, a future “Federal Search Commission,” which could complement the Federal Communications Commission. The same problem of “bigness” that can allow a megafirm to abuse its platform by squeezing rivals also creates opportunities to abuse users. Just as the Consumer Financial Protection Bureau serves a vital function in protecting the customers of large financial firms, a dedicated regulator could protect the users of dominant platforms.

Many large internet platforms are now leveraging a data advantage into profits, and profits into further domination of advertising markets. The dynamic is self-reinforcing: more data means providing better, more targeted services, which in turn attracts a larger customer base, which offers even more opportunities to collect data. Once a critical mass of users is locked in, the dominant platform can chisel away at both consumer and producer surplus. For example, under pressure from investors to decrease its operating losses, Uber has increased its cut from drivers’ earnings and has price discriminated against certain riders based on algorithmic assessments of their ability and willingness to pay. The same model is now undermining Google’s utility (as ads crowd out other information) and Facebook’s privacy policies (which grow more egregiously one-sided as the social network’s dominance expands).

Rahman offers us a rigorous way of recognizing such platform power, along with a tour de force distillation of cutting-edge social science and critical algorithm studies. Industries ranging from internet advertising to health care could benefit from a public utility-centered approach. This is work that could lead to fundamental reassessments of contemporary regulatory approaches. It is exactly the type of research that state, federal, and international authorities should consult as they try to rein in the power of many massive firms in our increasingly concentrated, winner-take-all economy.

Cite as: Frank Pasquale, Democracy Unchained, JOTWELL (August 17, 2017) (reviewing K. Sabeel Rahman, Private Power, Public Values: Regulating Social Infrastructure in a Changing Economy, 39 Cardozo L. Rev. 5 (forthcoming 2017), available at SSRN), https://cyber.jotwell.com/democracy-unchained/.

Disruptive Platforms

Orly Lobel, The Law of the Platform, 101 Minn. L. Rev. 87 (2016), available at SSRN.

Until recently, the law of the online platform involved intermediary liability for online content and safe harbors like CDA §230 or DMCA §512. The recent rise of online service platforms, a/k/a the “Uberization of everything,” has challenged this model. What Orly Lobel calls the “platform economy”—which includes the delivery of services (see Task Rabbit), the sharing of assets (see Airbnb), and more—has led to new laws, doctrinal adjustments, and big questions. What happens when the internet meets the localized, physical world? Are these platforms newly disruptive, or old issues in new wrapping? And how do we best design regulations for technological change? The Law of the Platform will appeal to those looking for thoughtful discussion of these questions. It will also appeal, more practically, to those searching for an encyclopedic overview of the fast-developing law in this area, from permitting requirements to employment law to zoning.

Lobel argues that the platform economy represents the “third generation of the Internet”: built on online platforms, but affecting offline service markets. Unlike the first generation of the Web, which connected us to information through search engines, or the second generation, which disrupted publishing, news, music, and retail, the third generation is characterized by “transforming the service economy, allowing greater access to offline exchanges for lower prices.” The platforms do not themselves own the physical assets or hire the labor to which they provide access. Instead, they sell access and information—and desperately try to avoid labels like “employer” or “bank” that might lead to regulation. Lobel maps a number of these digital platforms to their physical world counterparts: Airbnb and VRBO to hotels; Parking Panda to parking sites; Uber and Lyft to taxis; and EatWith to restaurants.

Lobel’s take on these platforms is largely positive. She sees the platform economy as lowering transaction costs and leading to “the market… perfecting.” To list just a few of the characteristics Lobel observes: the platform economy creates economies of scale, connecting individuals in huge marketplaces. It reduces waste and allows the more efficient use of privately owned resources. It allows both supply and demand to be broken down into smaller parts, facilitating smaller exchanges. It allows hyper-customization—you can now rent a “non-smoking, pet-friendly, Kosher, and partially furnished apartment for three nights in a specific neighborhood.” The platform economy reduces intermediation, getting rid of the middleman and thereby lowering costs. And importantly to Lobel, the dynamic ratings that platforms provide can reduce search and monitoring costs by creating incentives for good behavior by participants. Coase explained that in real life high transaction costs would prevent many transactions from occurring, but according to Lobel, the logic, technology, and networks of trust that new platforms bring to bear can and do enable these previously lost transactions.

Lobel thus appears in many ways to be a platform optimist. There are indications, however, that such optimism might not be warranted. Uber lost $2 billion in 2015 and $2.8 billion in 2016, subsidizing both sides of transactions to hook drivers with bonuses and riders with cheaper rides. A transportation industry analyst estimated in November 2016 that Uber was covering 60 percent of the cost of each ride. The picture painted by these numbers does not suggest a company that is “the concept of supply and demand embodied,” but rather a behemoth using significant venture capital resources to establish market dominance.

This brings us to the second half of Lobel’s article, on regulation. Lobel asks whether new platforms are successful “because they are introducing new business models… or because they seek regulatory avoidance and generate value from such avoidance.” Again, she seems to side with the platforms, characterizing them as both perfecting existing markets (through competition) and creating new ones (through differentiation). VRBO, Airbnb, and Homeaway are not just substitutes for a hotel, but create a differentiated experience of adventuring at private homes. An Airbnb study in California found that fourteen percent of customers would not have visited San Francisco at all but for an Airbnb stay. And because the rentals are cheaper than hotels, people stay longer and spend more in the local economy. Lobel seems largely convinced that these platforms don’t just lower costs in existing markets, but create new markets as well.

But the billion-dollar question (or in Uber’s case, $68 billion) is: are these platforms able to create these new markets because of innovation, or are they lowering costs by cleverly bypassing necessary regulatory regimes? What makes the platform economy legally disruptive is that these companies tend not to fit neatly into existing legal categories in regulated areas, like “employer” or “lender” or “bank.” Whether this is because of the law’s failure to keep pace with technological changes or these companies’ deliberate strategies to evade high-cost regulatory compliance through “sharewashing” is debatable. Back in March, the New York Times disclosed that Uber deliberately tagged and evaded enforcement authorities in Portland, OR; Boston; Paris; Las Vegas; and more. The DOJ is now investigating. But as Lobel points out, some attempts at regulation, like New York City’s taxicab medallion system, seem clearly geared towards protecting incumbents and keeping new actors out.

The middle third of the article taxonomizes the differences between illegitimately protectionist regulation and legitimate regulatory goals and regimes. Lobel divides platform regulations into three categories: (1) permitting, licensing, and price controls; (2) taxation; and (3) broadly speaking, “regulations that are about fairness, externalities, and normative preferences.” Lobel breezes through the tax issues, explaining that questions of collection are “largely technical” and platform providers should be responsible for tax collection for efficiency reasons. In contrast, Lobel characterizes regulations in the first category—permitting, occupational licensing, and price controls—as largely the result of industry capture, where incumbents extract rent at the expense of consumers and competitors (presumably, she’s not a fan of the bar). She argues that we should more directly regulate towards the goals these systems are designed to get at—safety, professionalism, and other forms of consumer protection—rather than using ex ante systems that favor incumbents.

The hardest cases, Lobel argues, are those that revolve around issues of “public welfare in the platform,” such as governing the characteristics and safety levels of particular neighborhoods (zoning) or protecting workers’ rights (employment laws). Her nuanced analysis of zoning regulations calls for empirical evaluation of the safety impact of short-term housing on residential neighborhoods. Her discussion of employment law makes two important observations: one, that the rise of the contingent workforce is not a feature of platforms alone; and two, that the resulting employment law issues—whether a worker is a covered employee or an independent contractor—also arise in cases having nothing to do with the platform economy (e.g., FedEx in the Ninth Circuit).

In other words, the legal disruption in these areas may have as much to do with the law itself, with older categories that are now breaking down in a number of areas, as with particular disruptive features of the platform economy. Solving these problems requires balancing competing social values, such as fairness with freedom of contract. “The platform provides new opportunities to continue these debates, but it does not transform or transcend these hard choices in any meaningful way.”

The last third of the article ventures into more dangerous territory. Lobel has previously done important work on the relationship between public regulation and private (or public-private) governance. She closes The Law of the Platform by returning to this topic. Where traditional regulation fails, Lobel argues, platforms themselves can, through private “regulation,” ensure consumer trust and a certain degree of consumer protection. Platforms do this by obtaining insurance, by voluntarily running background checks, and through rating and recording systems that track all transactions on a platform. It is this last form of governance that most excites Lobel, and most worries me.

“The confidence generated by state permitting, occupational licensing, and other regulatory requirements is substitutable with crowd confidence,” Lobel claims. Consumer review systems, Lobel proposes, now serve as a type of governance, forcing transparency better than a command-and-control public regulatory regime. “[W]atchdogging is crowdsourced,” she states. Constant data-gathering means prices will stay updated, and bad actors will quickly be uncovered, protecting consumers and ensuring their trust.

Unfortunately, Lobel does not discuss the downsides of ubiquitous data collection, from creating or exacerbating power disparities, to chilling positive behaviors in addition to negative behaviors, to the economic consequences of hacking. She does not address significant governance concerns—over transparency, discrimination, and self-serving behavior—that come from having this data housed in private, not regulatory or public, hands. And she does not discuss the economic or normative costs of business models formed on selling that privately gathered data back to government for a range of purposes, from infrastructure improvement to government surveillance.

The article closes with a general paean to dynamic and experimental governance as a better approach than command-and-control rule-making and enforcement. Experimentation (for example, in different localities) and data-gathering in the name of anti-discrimination policies are all well and good, but again there are costs to a more universal shift to softer enforcement that Lobel does not address here. Companies are often inspired to self-regulate by the background threat of harsher government enforcement. The risk of a larger move toward soft self-governance over government regulation of technological development is that consumer concerns will take a decided backseat.

The Law of the Platform is rich and complicated, and it raises many questions. Lobel does romanticize the platform, even as she acknowledges public welfare issues. She also romanticizes a lighter regulatory touch in the area of technological development, even while recognizing the legitimacy of a number of consumer concerns. But her discussions throughout of legal disruption and regulatory design make this a piece well worth reading for anyone following changes to technology and the law.

Cite as: Margot Kaminski, Disruptive Platforms, JOTWELL (July 19, 2017) (reviewing Orly Lobel, The Law of the Platform, 101 Minn. L. Rev. 87 (2016), available at SSRN), https://cyber.jotwell.com/disruptive-platforms/.

Inspecting Big Data’s Warheads

“Welcome to the dark side of Big Data,” growls the last line of the first chapter of Cathy O’Neil’s recent book, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. As that sentence (and that subtitle) suggests, this is not a subtle book. O’Neil chronicles harms from the widespread use of machine learning and other big data systems in our society. She is convinced that something ominous and harmful is afoot, and she lays out a bill of particulars listing dozens of convincing examples.

This is a book that I like (lots) because we need outspoken and authoritative chroniclers of the downsides of big data decisionmaking. It advances a carefully articulated and well-supported argument, delivered with urgency and passion. Readers yearning for a balanced look at both the benefits and the costs of our increasingly automated society, however, should keep searching.

If we built a prototype for a qualified critic of big data, her background would look a lot like O’Neil’s: Harvard math PhD, MIT postdoc, Barnard professor, hedge fund quant during the financial crisis, start-up data scientist. Throw in blogger (mathbabe.org) and Occupy organizer for good measure, and you cannot quibble with the credentials. O’Neil is an author who knows what she is talking about, who also happens to be a writer of compelling, clear prose, an evidently skilled interviewer, and a great speaker.

Perhaps most importantly, the book provides legal scholars with a concise and salient label—weapons of math destruction, or WMDs—to describe decisionmaking algorithms possessing three features: opacity, scale, and harm. This label and three-factor test can help us identify and call out particularly worrisome forms of automated decisionmaking.

For example, she seems to worry most—and have the most to say—about so-called “value-added modeling” systems for assessing the effectiveness of teachers in public schools. Reformers such as Michelle Rhee, former Chancellor of the DC public schools, spurred by policies such as No Child Left Behind, embraced a data-centric model that selected which teachers to fire based heavily on the test scores of their students. The affected teachers had little visibility into the magic formulae that decided their fate (opacity); these tests affected thousands of teachers around the country (scale); and good teachers were released from important jobs they loved, depriving their students of their talents (harm). When opacity, scale, and harm align in an algorithmic decisionmaking system, software can worsen inequality and ruin lives.

Building on these factors, O’Neil returns repeatedly to the important role of feedback in exacerbating (and sometimes blunting) the harm of WMDs. If we use the test results of students to identify topics they are not learning, and to change what or how we are teaching, this is a positive and virtuous feedback loop, not a WMD. But when we decide to fire the bottom five percent of teachers based on those same scores, we are assuming the validity and accuracy of the test, making it impossible to use feedback to test the strength of those assumptions. The critical role of feedback is a key insight of the book.

The book brims with other examples of WMDs, devoting considerable attention to criminal recidivism scoring systems, employment screening programs, predictive policing algorithms, and even the U.S. News college ranking formula. O’Neil spends entire chapters covering big data systems that stand in the way of our getting a job, succeeding at work, buying insurance, and securing credit.

Legal scholars who write about automated decisionmaking or artificial intelligence may be surprised to see this book reviewed in these pages. O’Neil’s book is long on description with very little attention paid to policy solutions. A book of deep legal scholarship, this is not. As capably as she writes about math and algorithms, O’Neil falters—and I’m guessing she would cop to this—when it comes to law and regulation, mixing equal parts unrealistically optimistic sentiments about laws like FCRA; vague descriptions about the prospect of Constitutional challenges to data practices; and unrealistic calls for new legislation.

Despite these extra-disciplinary shortcomings, this book should be read by legal scholars, who are not likely to already know all the stories it tells and who will find many compelling (if chilling) examples to cite. As one who does not focus on education policy, for example, I was struck by the detailed and personal stories of teachers fired because of the whims of value-added modeling. And even for the old stories I had heard before, I was struck by how well O’Neil tells them, distilling complicated mathematical concepts into easy-to-digest descriptions and using metaphor and analogy with great skill. I will never again think of a model without thinking of O’Neil’s lovely example of the model she uses to select what to cook for dinner for her children.

The book is in parts intemperate. But we live in intemperate times, and the problems with big data call for an intemperate call to arms. A more measured book, one which tried to mete out praise and criticism for big data in equal measure, would not have served the same purpose. This book is a counterpoint to the ceaseless big data triumphalism trumpeted by powerful partisans, from Google to the NSA to the U.S. Chamber, who view unfettered and unexamined algorithmic decisionmaking as their entitlement and who view criticism of big data’s promise as an existential threat. It responds as well to big data’s academic cheerleaders, who spread the word about the unalloyed wonderful potential for big data to drive innovation, grow the economy, and save the world. A milquetoast response would have been drowned out by these cheery tales, or worse, co-opted by them.

“See,” big data’s apologists would have exclaimed, “even Cathy O’Neil agrees about big data’s important benefits.” O’Neil is too smart to have written a book that could have been co-opted in this way. “Big Data has plenty of evangelists, but I’m not one of them,” O’Neil proudly proclaims. Neither am I, and I’m glad that we have a thinker and writer like O’Neil shining a light on some of the worst examples of the technological futures we are building.

Cite as: Paul Ohm, Inspecting Big Data’s Warheads, JOTWELL (June 20, 2017) (reviewing Cathy O'Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (2016)), https://cyber.jotwell.com/inspecting-big-datas-warheads/.

Starting with Consent

James Grimmelmann, Consenting to Computer Use, 84 Geo. Wash. L. Rev. 1500 (2016), available at SSRN.

The Computer Fraud and Abuse Act (“CFAA”), enacted in 1986, has long been a source of consternation for jurists and legal scholars alike. A statute marred by long-standing circuit splits over basic terminology and definitions, the CFAA has strained under the weight of technological evolution. Despite thousands of pages of law review ink spilled attempting to theoretically resuscitate this necessary but flawed statute, the CFAA increasingly appears to be broken. Something more than a minor Congressional correction is required.

In particular, the central term of the statute—authorization—is not statutorily defined. As the CFAA has morphed through amendments to encompass not only criminal but also civil conduct, the meaning of “authorized access” has become progressively more slippery and difficult to anticipate. Legal scholarship has long voiced concerns over the CFAA, including whether certain provisions are void for vagueness,1 create opportunity for abuse of prosecutorial discretion,2 and give rise to unintended negative impacts on employee mobility and innovation.3

Enter James Grimmelmann’s Consenting to Computer Use. In this work, Grimmelmann offers us a clean slate as an important and useful starting point for the next generation of the CFAA conversation. He returns us to a first-principles analysis with respect to computer intrusion, focusing on the fundamental question of consent.

Grimmelmann urges us to take a step back and hit reset on the scholarly CFAA conversation. In lieu of tortured attempts to find Congressional meaning for “authorization” in legislative history, or misguided attempts to shoe-horn computer intrusion into last-generation (criminal or civil) trespass regimes, Grimmelmann leads us through an intuitively resonant inquiry around consent. As Grimmelmann succinctly puts it, “[q]uestions of the form, ‘Does the CFAA prohibit or allow X?’ are posed at the wrong level of abstraction. The issue is not whether X is allowed, but whether X is allowed by the computer’s owner.” (P. 1501.)

An inquiry into implicit or explicit consent by a computer’s owner is present in every computer intrusion case, Grimmelmann explains. He reminds us of the importance of the context of the intrusion. Herein lies the primary insight of the paper: the CFAA’s key term requires construction rather than interpretation. In other words, Grimmelmann acknowledges and embraces the suboptimal statutory reality that most other scholars have danced around: the CFAA itself is of little assistance in crafting a workable legal analysis for defining computer intrusion and unauthorized access. The starting point for understanding the legal concept of CFAA “authorization” (or lack thereof), Grimmelmann argues, will be found in engaging with the traditional legal concept of consent. He explains that once we begin to rely on consent as the baseline of future CFAA inquiry, courts can engage with crafting rules in light of the overall goals of the CFAA and the facts of specific cases.

The CFAA context is challenging, and Grimmelmann acknowledges key differences between technological contexts and more traditional ones. Grimmelmann explains that software is automated and plastic—meaning that consent to access is necessarily prospective, and that software can function in unforeseeable ways. These features (bugs?) have added to the complexity of the computer intrusion inquiry. However, when a legal paradigm is constructed around consent, Grimmelmann argues, these elements of automation and plasticity become less dispositive. Providing the example of a compromised vending machine, he explains that it makes no difference whether an intruder tricked the machine by exploiting a hole in the machine’s logic or whether the intruder punched a hole in its side. The issue is the compromise and the lack of consent.

Grimmelmann distinguishes factual consent from legal consent, relying on theoretical work from Peter Westen. As Grimmelmann explains the distinction, “factual consent is a function of both code and words; of how a computer is programmed and of its owner’s expressions, such as oral instructions, terms of service, and employee handbooks.” (P. 1511.) Meanwhile, legal consent is based on factual consent, but can depart from it if a jurisdiction believes “that factual consent is not sufficient to constitute legal consent” or that it is not necessary based on the totality of the circumstances, including whether implicit consent may have been granted. (P. 1512.) Grimmelmann cautions that different types of CFAA cases will necessitate a distinction between factual and legal consent. In other words, “without authorization” for purposes of the CFAA can refer to multiple possible types of conduct because legally sufficient consent has always been constructed by courts across various areas of law and various fact patterns.

With this excellent article, Grimmelmann has set the stage for a new line of CFAA scholarship, one that is better-connected to traditional legal first principles. As technological evolution continues to strain the overall framework of the CFAA, this work opens the door to a more aggressive re-evaluation of the statute in technological context and offers us a possible way forward.


Editor’s Note: James Grimmelmann took no part in the selection or editing of this review.

  1. Orin S. Kerr, Vagueness Challenges to the Computer Fraud and Abuse Act, 94 Minn. L. Rev. 1561 (2010).
  2. Note, The Vagaries of Vagueness: Rethinking the CFAA as a Problem of Private Nondelegation, 127 Harv. L. Rev. 751, 772 (2013) (“To whatever extent prosecutorial discretion might provide some redeeming amount of government participation in the criminal context, such participation is absent in civil cases between private parties.”).
  3. Andrea M. Matwyshyn, The Law of the Zebra, 28 Berkeley Tech. L.J. 155 (2013).
Cite as: Andrea Matwyshyn, Starting with Consent, JOTWELL (May 19, 2017) (reviewing James Grimmelmann, Consenting to Computer Use, 84 Geo. Wash. L. Rev. 1500 (2016), available at SSRN), https://cyber.jotwell.com/starting-with-consent/.

Make America Troll Again

There is a theory that Donald Trump does not exist, and that the fictional character of “Donald Trump” was invented by Internet trolls in 2010 to make fun of American politics. At first “Trump” himself was the joke: a grotesque egomaniac with orange skin, a debilitating fear of stairs, and a tenuous grasp on reality. He was a rage face in human form. But then his creators realized that there was something even funnier than “Trump’s” vein-popping, bile-specked tirades against bad hombres and nasty women: the panicked and outraged denunciations he inspired from self-serious defenders of the status quo. “Trump’s” election was the greatest triumph of trolling in human history. It has reduced politics, news, and culture to a non-stop, deplorably epic reaction video.

There is no entry for “Donald Trump” in the index of Whitney Phillips’s 2015 book, This Is Why We Can’t Have Nice Things: Mapping the Relationship between Online Trolling and Mainstream Culture. But this playful, perceptive, and unsettling monograph is an outstanding guidebook to the post-Trump hellscape online trolling has made for us. Or perhaps I should say to the hellscape we have made for ourselves, because Phillips’s thesis is that trolling is inherently bound up with the audiences and antagonists who can’t stop feeding the trolls. Much like Trump, trolls “are born of and fueled by the mainstream world.” (Pp. 168-69.)

This Is Why We Can’t Have Nice Things is first and foremost an act of ethnography. Phillips embedded herself in online trolling communities, interviewing participants and following them as their targets and methods evolved over the years. The book strikes an especially good balance: close enough to have real empathy for its subjects’ motivations and worldview, but not so close as to lose critical perspective. It also displays an exceptionally good sense of context: the reporting is grounded in specific trolling communities, but Phillips is careful about situating those communities within large cultural trends, online and off.

There are many kinds of trolls: patent trolls who file suits without warning, commentator trolls who make provocative arguments with a straight face. Phillips focuses on what she calls “subcultural trolls,” who self-identify as part of a community of trolls, set apart from the mainstream, engaged in the anonymous (or pseudonymous) exploitation of others for the lulz. Think /b/ on 4chan, think Anonymous, think AutoAdmit, think alt-right.

Phillips defines “lulz” (a corruption of “LOL” with a sharper edge) as “amusement at other people’s distress.” (P. 27.) A classic example is “RIP trolling”: going to social media memorial pages and leaving messages to shock, confuse, and anger grieving families. Phillips argues that lulz are characterized by fetishism, generativity, and magnetism. “Fetishism” is used in a quasi-Marxist sense of dissociation: RIP trolling, for example, involves an act of emotional detachment that cuts away the actual human tragedy and focuses on extracting humor from arbitrary details, like a victim’s lost iPod. “Generativity” refers to the same kind of playful remixing, repurposing, and world-building that online fanfic communities engage in. And “magnetism” captures lulz’ memetic qualities: they draw attention in and allow a trolling community to cohere around iterated themes and phrases.

The heart of the book (Part II), with examples drawn roughly from 2008 to 2011, is a sustained argument against being too quick to treat trolls as the Other. Trolls take expert advantage of mainstream media attention. Their tactics are often straight out of the corporate PR playbook and its even more unsavory cousins, and their cultural postures are funhouse-mirror reflections of attitudes that are prevalent in mainstream culture. (Breitbart, in other words, is a professionalized political trolling operation—or perhaps it would be more accurate to say that it is a news organization genetically enhanced with troll DNA.) “[T]rolls and sensationalist corporate media outlets are in fact locked in a cybernetic feedback loop predicated on spectacle,” Phillips writes. (P. 52.)

Trolls thrive on mainstream media attention in two related ways. One is the classic hoax, updated for the Internet age. Some trolls are masters at feeding the mainstream media false stories (fake news!). Multiple local TV stations fell for troll-supplied stories about a supposed crisis sweeping through the United States of teenagers huffing jenkem (a fermented mixture of feces and urine). The other is that trolls are skilled at turning attention into a game only they can win. Resistance is futile; one cannot argue with a sea lion or reason with the Joker. In this, Phillips argues, trolls channel Schopenhauer. The point is to win the argument by any means necessary, right or wrong. (If the technique sounds familiar, it may be because you’ve seen it coming from the talking heads on Fox News or from behind the podium at the White House Press Briefing Room.)

Aspects of trolling are rooted in widely shared mainstream attitudes. It draws heavily on a muscular strain of free speech libertarianism that shields even the most offensive speech. If you don’t like what I’m saying, it’s your own damn fault for listening, or for being bothered by it. If you don’t want your feelings to be hurt, don’t have feelings; if you don’t like death threats, just kill yourself. Phillips does a nice job tracing trolling’s complicated relationship with race, gender, and sexuality: the same trolls—the same trolling campaign—can enjoy lulz at the expense of vulnerable minorities, privileged white middle-class comfort, conservative intolerance, and liberal pieties. Making racist jokes is both something that many millions of Americans routinely indulge in and something that makes many millions of Americans (not usually the same ones) really angry.

Trolling eats everything, including especially itself, and reduces it all to a pulsing blob of incoherent imagery, held together only by the pleasure of a laugh at the expense of someone who can’t take the joke. Indeed, there is no other joke; trolling is bullying, or dominance politics from which everything but the lulz has been stripped away. Phillips calls it “pure privilege,” and explains that trolls “refuse to treat others as they wish to be treated. Instead, they do what they want, when they want, to whomever they want, with almost perfect impunity.” (P. 26.)

But, to repeat, trolls “aren’t pulling their materials, chosen targets, or impulses from the ether. They are born of and fueled by the mainstream world—its behavioral mores, its corporate institutions, its political structures and leaders—however much the mainstream might rankle at the suggestion.” (Pp. 168-69.)

We have met the troll and it me.

Cite as: James Grimmelmann, Make America Troll Again, JOTWELL (April 21, 2017) (reviewing Whitney Phillips, This Is Why We Can't Have Nice Things: Mapping the Relationship between Online Trolling and Mainstream Culture (2016)), https://cyber.jotwell.com/make-america-troll-again/.

Back to the Essentials

Michael Buckland, Information and Society (The MIT Press Essential Knowledge Series, 2017).

Judging from its title, Professor Michael Buckland’s book seems to be yet another introduction to the relationship between information and society. Upon reading it, however, you encounter a well-organized and concise introduction, written simply but not simplistically, and enriched by historical references to what was once called library science and is now more often referred to as (non-mathematical) information science.

As such, it fits well into the MIT Press series that has brought us, among others, John Palfrey’s Intellectual Property Strategy and Samuel Greengard’s The Internet of Things.

Buckland guides us through the various dimensions of information, such as physical characteristics, formal elements, meaning, use, the infrastructure necessary for its use, and most of all its cultural dependencies. He uses the passport as an instructive example and introduces the term “document” to make the various informational perspectives more tangible. Further chapters deal with organization, naming, description, and retrieval techniques for documents and their possible evaluations.

All this brought me back to my own beginnings when, at our research institute in the late 1970s, we were building a metadata system for mainly European publications in the budding discipline of what was then called “Computers and Law.” I still think there is no better exercise to enter a new field of knowledge than to develop and systematize descriptors. But it is not nostalgia that makes me introduce Buckland’s book here as a thing “we like (lots).”

Buckland’s tour through the essentials of information handling—also because of its clear and mind-refreshing language—opens a new perspective on cyberlaw. The book invites us to take a step back from ever-changing technological characteristics, regulatory reactions, and accumulating caselaw and to take a fresh look at what all this is about: at how our societies create, handle, organize, share, and restrict information, and at how all this should be done considering our constitutional value systems—in short, to look at information law properly and then from there to discuss and evaluate the implications of technological change.

Buckland’s remarks on “The Past and the Future” are a good example of this insight. Among other observations, he states (P. 173): “… there is a shift from individuals deriving benefit from the use of documents to documentary regimes seeking to influence, control, and benefit from individuals.” What he is pointing to here, in highly unobtrusive language, is one of the core issues of cyberlaw—the power shifts in information handling. The book is rich with such windows for a fresh look at the fundamentals of cyberlaw, such as his frequent references to the important role of trust systems in communication.

And—last but not least—it should be added, as others have noted before about this series (for example, Nasrine Olson’s book review at 18 New Media & Society 680 (2016)): The books of this series are a nice handy size, feel good to the touch, and have typography gentle to the eyes. Such things also count when we like things—even more now, when we look at screens rather more often than at paper pages. But I am getting nostalgic again …

Cite as: Herbert Burkert, Back to the Essentials, JOTWELL (March 24, 2017) (reviewing Michael Buckland, Information and Society (The MIT Press Essential Knowledge Series, 2017)), https://cyber.jotwell.com/back-to-the-essentials/.

Could There Be Free Speech for Electronic Sheep?

Toni M. Massaro, Helen L. Norton & Margot E. Kaminski, Siri-ously 2.0: What Artificial Intelligence Reveals about the First Amendment, Minn. L. Rev. (forthcoming 2017), available at SSRN.

The goal of “Strong Artificial Intelligence” (hereinafter “strong AI”) is to imbue a machine with intellectual capabilities that are functionally equivalent to those possessed by humans. As machines such as robots become more like humans, the possibility grows that laws intended to mediate the behaviors of humans will be applied to machines.

In this article, the three authors assert that the First Amendment may protect speech by strong AI. It is a claim, the authors state in their abstract, “that discussing AI speech sheds light on key features of prevailing First Amendment doctrine and theory, including the surprising lack of humanness at its core.” And it is premised on an understanding of a First Amendment which “increasingly focuses not on protecting speakers as speakers but instead on providing value to listeners and constraining the government.”

The first substantive section of the article considers justifications for free speech rights for AI speakers, both positive and negative. Positive justifications embrace the potential usefulness of AI speech to human listeners. According to the authors, AI speech can contribute to human meaning-making and construction of selfhood, and can produce the sorts of ideas and information that can lead to human enlightenment. Negative justifications for free speech rights for AI speakers reflect views that deeply distrust governmental regulation of speech. The Supreme Court has broadened its views of free speech protection in part based on its doubts about the government’s ability to competently balance social costs and benefits pertaining to speech, especially when driven by censorial motives. The authors conclude that whether it is providing benefits to humans or remaining free from government constraints, AI speech can reasonably be treated like human speech under most existing First Amendment principles and practices, because the humanness of the speaker is neither a stated nor an implied criterion for speech protection. The only exceptions are theories of the First Amendment that are explicitly predicated on the value that free speech has for humans.

The second section of the article explains in more detail that First Amendment law and doctrine are largely inattentive to the humanness of speakers. It contains the observation that corporations famously receive speech protection, rebutting any presumption that innate or prima facie humanness matters to First Amendment rights, even though human autonomy and dignity are values free speech is intended to protect. Humans may need to be part of the equation, but having them as background beneficiaries may be enough for the First Amendment to attach. The authors further argue that strong AI may in the future be credited with sufficient indicia of personhood to warrant inclusion even in speaker-focused speech protections.

Next, the authors discuss whether possessing human emotions is or should be a prerequisite for a speaker to claim First Amendment protection. Not surprisingly, they conclude that AI is growing increasingly affective, while free speech laws ignore emotions, protecting cruel, nasty, racist, sexist, and homophobic speech regardless of the emotional damage it might inflict. They repeat the point about corporations having cognizable speech rights, and remind readers that the two key concerns of contemporary free speech jurisprudence are whether the speech potentially has utility, and whether the speech is something the government has no right to silence. If the answer to either question is yes, the speech is protected.

The authors then contemplate whether the speech of other nonhuman speakers, such as animals, could be ascribed First Amendment protection once the slippery slope of AI speech protections is sufficiently iced. No, they conclude: unlike computers, animals are not intended to serve human informational needs. This section of the article gave me a flashback to my law school Evidence class, in which I learned that animals cannot lawfully be declarants, nor can their speech constitute hearsay. I’ve since seen and read many legal dramas that flout this well-established legal principle. I suspect this is because audiences enjoy watching animals testify in court enough to forgo accuracy. Animals seem inherently honest. AI beings like robots probably evoke more mixed reactions because of the range of ways they are depicted in popular culture. Commander Data from Star Trek: The Next Generation always seemed trustworthy, but HAL 9000 from 2001: A Space Odyssey will kill you.

The authors then discuss doctrinal and practical objections to First Amendment protection of AI speech. Courts might find a way around the fact that AI speakers cannot be said to have culpable mental states when evaluating and ruling on defamation claims. Judges could, for example, treat AI speakers as dependent legal persons or find another way to facilitate litigation in which an AI speaker is the plaintiff or defendant. Should an AI speaker be found liable, it could be unplugged.

The fourth section of the article looks at what the limits of AI speech protection might be. Free speech protection is already quite expansive, say the authors, but there might be a way to formulate limiting principles, including outright regulation, that apply only to the unique challenges posed by AI speech. This claim puzzled me a little, because it seems to pull in the direction of content-based distinctions. The offered analogies to regulation of commercial speech, and to professionals’ speech to patients and clients, are only partly reassuring. Regulation of commercial speech is a thorny, confusing doctrinal morass, and the authors do not explain why or how courts would do better with AI speech.

Next, the authors note that what AI produces is likely to be characterized as expressive conduct (“or something similar”) rather than pure speech. This raises definitional difficulties, not unique to AI, in separating speech-related motives or interests from activities that can be permissibly regulated.

Finally, the authors conclude that legal regimes have always managed to handle emerging technologies, and we should expect this to continue with respect to AI speech. There may be a lot of complicated line drawing, but that’s the way it goes in First Amendment jurisprudence.

I enjoyed reading this engaging piece of scholarship very much. It is accessibly written, and the authors’ willingness to generalize about First Amendment law and policy is truly refreshing. Its central claim about the lack of importance of real human beings and their emotions to most free speech theory rings true and has relevance well beyond the strong AI context. The piece confirmed my own beliefs about the current state of free speech, and made me viscerally miss the late C. Edwin Baker, who spent so much time passionately arguing that the central purpose of the First Amendment is the promotion of *human* liberty. He’d have written a far feistier review essay for sure, challenging the authors to be activists who instantiate human liberty interests within the center of the First Amendment. But he would have appreciated the creativity of the article just as I did.

Editor’s Note: Margot Kaminski took no part in the editing of this review.

Cite as: Ann Bartow, Could There Be Free Speech for Electronic Sheep?, JOTWELL (February 23, 2017) (reviewing Toni M. Massaro, Helen L. Norton & Margot E. Kaminski, Siri-ously 2.0: What Artificial Intelligence Reveals about the First Amendment, Minn. L. Rev. (forthcoming 2017), available at SSRN), https://cyber.jotwell.com/could-there-be-free-speech-for-electronic-sheep/.

What is Cyberlaw, or There and Back Again

Jeanette Hofmann, Christian Katzenbach & Kirsten Gollatz, Between Coordination and Regulation: Finding the Governance in Internet Governance, New Media & Society (2016), available at SSRN.

The concept of “cyberspace” has fascinated legal scholars for roughly 20 years, beginning with Usenet, Bulletin Board Systems, the World Wide Web, and other public aspects of the Internet. Cyberspace may be defined as the semantic embodiment of the Internet, but to legal scholars the word “cyberspace” itself initially reified the paradox that the Internet simultaneously seemed to be free of law and to constitute law. The explorers of cyberspace were like the advance guard of the United Federation of Planets, boldly exploring open, uncharted territory and domesticating it in the interest of the public good. The result was to be both order (of a sort) without law, to paraphrase and re-purpose Robert Ellickson’s work, and law (of a different sort), to distill Lawrence Lessig’s famous exchange with Judge Frank Easterbrook.1 For the last 20 years, more or less, legal scholars have intermittently pursued the resulting project of defining, exploring, and analyzing cyberlaw, but without really resolving this tension, that is, without really identifying the “there” there. Perhaps the best, most engaged, and certainly most optimistic embrace of that point of view is David Post’s In Search of Jefferson’s Moose.

Less speculative and less adventurous cyberlaw scholars, which is to say, most of them, quickly adapted to the seeming hollowness of their project by aligning themselves with existing literatures on governance, a rich and potentially fruitful field of inquiry derived largely from research and policymaking in the modern regulatory state. That material was made both relevant and useful in the Internet context via the emergence of global regulatory systems that speak to the administration of networks, particularly the Domain Name System and ICANN, the institution that was invented to govern it. The essential question of cyberlaw became, and remains: What is Internet governance, and what do we learn about governance in general from our observations and experiences with Internet governance? As an intervention in that ongoing discussion, Between Coordination and Regulation: Finding the Governance in Internet Governance is an especially welcome and clarifying contribution, all the more so because of its relative brevity.

The lead author is the head of the Humboldt Institute for Internet and Society and a veteran observer of and participant in Internet governance dialogues at ICANN and the World Summit on the Information Society (WSIS). She and two colleagues at the Humboldt Institute have produced a useful review of the relevant Internet governance literature and a new framework for further research and analysis that is eclectic in its reference to and reliance on existing material, and therefore independent of the influence of any single prior theorist or thinker. The resulting framework is novel yet recognizably derivative of, and continuous with, earlier work in the field. This is not a work primarily of legal scholarship by legal scholars, but properly understood, it should contribute in important ways to sustaining the ongoing project of cyberlaw. Internet governance is conceptualized here in ways that make clear its relevance and utility to questions of governance generally.

The paper introduces its subject with an overview of the definitional problems associated with the term “governance” and especially the phrase “Internet governance.” In phenomenal terms, the concept often refers to combinations of three things: one, rulemaking and enforcement and associated coordinating behaviors that implicate state actors acting in accordance with established political hierarchies; two, formal and informal non-state actors acting in less coordinated or un-coordinated “bottom up” ways, including through the formation and evolution of social norms; and three, technical protocols and interventions that have human actors as their designers but that exercise a sort of independent technical agency in enabling and constraining behaviors.

The authors note that many researchers seeking to define and understand relevant combinations equate “governance” with “regulation,” which leads to the implication that governance, like regulation, should be purposive with respect to its domain and that its goals should be evaluated accordingly. They reject that equation, observing that the experience of Internet institutions and other actors, of both legal and socio-technical character, suggests that such a purposive framing of the phenomenon of governance is unhelpfully underinclusive. A large amount of relevant behavior and consequences cannot be traced in purposive terms or in functional terms to planned interventions.

Also rejected, this time on overinclusiveness grounds, is the idea that governance can and should be equated with coordination among actors in a social space, as such. The authors correctly note that if governance is coordination of actors in social life, then virtually any and every social phenomenon is governance, and the concept loses any distinct analytic potential.

In between these two poles of the spectrum—that governance is regulation, or that governance is coordination—the authors settle on the argument that governance is and should be characterized as “reflexive coordination.” They define this concept as follows:

Critical situations occur when different criteria of evaluation and performance come together and actors start redefining the situation in question. Routines are contested, adapted or displaced through practices of articulation and justification. Understanding governance as reflexive coordination elucidates the heterogeneity of sources and means that drive the emergence of ordering structures. (P. 20.)

This approach preserves the role of heterogeneous assemblages of actors, conventions, technologies, purposes, and accidents, while calling additional attention to moments and instances of conflict and dispute, where “routine coordination fails, when the (implicit) expectations of the actors involved collide and contradictory interests or evaluations become visible.” The authors’ point is that these processes of reflexive coordination are specifically aligned with the concept of Internet governance in particular and with governance in general. The reflexivity in question consists of practices and processes of contestation, conflict, reflection, and resolution that sometimes accompany more ordinary or typical practices and processes of institutional and technical design and activity. Those ordinary or typical practices and processes raise questions of coordination and/or regulation, broadly conceived; such questions are appropriately directed to the Internet, but not under the governance rubric.

The authors acknowledge their debt to a variety of social science research approaches, including those of Bruno Latour, John Law, Elinor Ostrom, Douglass North, and Oliver Williamson, and to American scholars of law and public policy, notably Michael Froomkin, Milton Mueller, Joel Reidenberg, and Lawrence Lessig, but without resting their case specifically on any one of them or on any particular work. As a student of the subject, I was struck not by the identities of the researchers whose work is cited, but rather by the conceptual affinity between the authors’ concept of “reflexive coordination” and an uncited one. Recently, in a parallel literature on the anthropology (and, dare I say, governance) of open source computer software, Christopher Kelty, now a researcher at UCLA, coined the phrase “recursive public” to describe the attributes of an open source software development collective.2 Kelty writes:

A recursive public is a public that is vitally concerned with the material and practical maintenance and modification of the technical, legal, practical, and conceptual means of its own existence as a public; it is a collective independent of other forms of constituted power and is capable of speaking to existing forms of power through the production of actually existing alternatives. Free Software is one instance of this concept, both as it has emerged in the recent past and as it undergoes transformation and differentiation in the near future.…In any public there inevitably arises a moment when the question of how things are said, who controls the means of communication, or whether each and everyone is being properly heard becomes an issue.… Such publics are not inherently modifiable, but are made so—and maintained—through the practices of participants.3

The extended quotation is offered to suggest that processes of reflexive coordination already resonate in governance domains beyond those associated with the Internet itself. To the extent that reflexive coordination needs affirmation as a generalized model of governance, Kelty’s research on recursive publics offers useful supporting evidence. Open source software development collectives fit the model of governance quite readily, despite the fact that the concepts of “reflexive coordination” and the “recursive public” arise in different intellectual traditions and for different purposes. The challenges of understanding and practicing Internet governance speak to the challenges of understanding and practicing governance generally. Between Coordination and Regulation: Finding the Governance in Internet Governance offers a helpful and important step forward in that broader project.

  1. See Frank H. Easterbrook, Cyberspace and the Law of the Horse, 1996 U. Chi. Legal F. 207; Lawrence Lessig, The Law of the Horse: What Cyberlaw Might Teach, 113 Harv. L. Rev. 501 (1999).
  2. Christopher M. Kelty, Two Bits: The Cultural Significance of Free Software (2008).
  3. Id. at 3.
Cite as: Michael Madison, What is Cyberlaw, or There and Back Again, JOTWELL (December 9, 2016) (reviewing Jeanette Hofmann, Christian Katzenbach & Kirsten Gollatz, Between Coordination and Regulation: Finding the Governance in Internet Governance, New Media & Society (2016), available at SSRN), https://cyber.jotwell.com/what-is-cyberlaw-or-there-and-back-again/.

Automatic – for the People?

Andrea Roth, Trial by Machine, 104 Georgetown Law Journal 1245 (2016).

Crucial decision-making functions are constantly migrating from humans to machines. The criminal justice system is no exception. In an insightful, eloquent, and rich recent article, Professor Andrea Roth addresses the growing use of machines and automated processes in this specific context, critiquing the ways these processes are currently implemented. The article concludes that humans and machines must work in concert to achieve ideal outcomes.

Roth’s discussion is grounded in a rich historical timeline. The article brings together measures old and new—moving from the polygraph to camera footage, impairment-detection mechanisms such as the Breathalyzer, and DNA typing, and concluding with the AI recommendation systems of the present and future. The article provides an overall theoretical and doctrinal discussion and demonstrates how these issues evolved. Yet it also shows that as time moves forward, the problems often remain the same.

The article’s main analytical contribution is its two central factual assertions: first, that machines and mechanisms are introduced unequally, as a way to strengthen the prosecution and not to exonerate. In other words, there are no similar opportunities to apply these tools to enhance defendants’ cases. Second, that machines and automated processes are inherently flawed. This double analytic move might bring a famous “Annie Hall” joke to mind: “The food at this place is really terrible . . . and such small portions.”

The article’s first innovative and important claim—regarding the pro-prosecution bias of decisions made via machine—is convincing and powerful. Roth carefully works through technological test cases to show how the state uses automated and mechanical measures to limit “false negatives”—instances in which criminals eventually walk free. Yet when the defense suggests using the same measures to limit “false positives”—the risk that the innocent are convicted—the state pushes back and argues that machines and automated processes are problematic. Legislators and courts would be wise to act upon this critique and consider balancing the use of automated measures.

Roth’s second argument—automation’s inherent flaws—constitutes an important contribution to a growing literature pointing out the problems of automated processes. The article explains that such processes are often riddled with random errors that are difficult to locate. Furthermore, they are susceptible to manipulation by the machine operators. Roth demonstrates in several contexts how subjective assumptions can be and are buried in code, inaccessible to the relevant litigants. Thus, the so-called “objective” automated process in fact introduces the unchecked subjective biases of the system’s programmers. Roth further notes that the influence of these biased processes is substantial: even when automated processes are intended merely to recommend an outcome, the humans using them give extensive deference to the automated decision.

The article fairly addresses counter-arguments, noting the virtues of automated processes. Roth explains how automated processes can overcome systematic human error and thus limit false positives in the contexts of DNA evidence and computer-assisted sentencing. To this I might add that machines allow decisions made in the periphery of systems to be replaced with those made by central planners. In many instances, it might be both efficient and fair to prefer the systematic errors of a central authority to the errors that arise when rules are applied with discretion in the field, subject to the many biases of individual agents.

In addition, Roth explains that automated processes are problematic, as they compromise dignity, equity, and mercy. Roth’s argument that trial by machine compromises dignity is premised on the fact that applying some of these mechanical and automated measures calls for degrading processes and the invasion of the individual’s property.

This dignity-based argument could have been strengthened by a claim often voiced in Europe: to preserve dignity, a human should be subject to the decision of a fellow human, especially when there is much at stake. Anything short of that will prove an insult to the affected individual’s honor. Europeans provide strong legal protections for dignity, which are important to mention—especially given the growing influence of EU law (a dynamic at times referred to as the “Brussels Effect”). Article 22 of the recently introduced General Data Protection Regulation (GDPR) provides that individuals have the right not to be subjected to decisions that are “based solely on automated processing” when these are deemed to have a significant effect. Article 22 allows several exceptions, yet individuals must be provided with a right to “obtain human intervention,” and with the ability to contest the automated findings and to examine how the decision was reached (see also Recital 71 of the GDPR). Similar provisions were featured in Articles 12(a) and 15 of the Data Protection Directive, which the GDPR is set to replace over the next two years, and in older French legislation. To be fair, it is important to note that in some EU Member States these provisions have become dead letters. Their recent inclusion in the GDPR will no doubt revive them. However, the GDPR does not pertain to criminal adjudication.

Roth’s argument regarding equity (or the lack thereof in automated decisions) is premised on the notion that automated processes are unable to exercise moral judgment. Perhaps this is about to change. Scholars are already suggesting the creation of automated tools that will do precisely that. Thus, this might not be a critique of the processes in general, but of the way they are currently implemented—a concern that could be mitigated over time as technology progresses.

That machine-driven decisions lack mercy is obviously true. However, the importance of building mercy into our legal systems is debatable. Is the existing system equally merciful to all social segments? One might carefully argue that very often the gift of mercy is yet another privilege of the elites. As I argue elsewhere, automation can remove various benefits the controlling minorities still have—such as the cry for mercy—and this might indeed explain why societies are slow to adopt these measures, given the political power of those who stand to be harmed by their expansion.

To conclude, let’s return to Woody Allen and the “Annie Hall” reference. If, according to Roth, automated processes are problematic, why should we nonetheless complain that the portions are so small, and consider expanding their use to limit “false positives”? Does making both claims make sense? I believe it does. For me and others who are unconvinced that automated processes are indeed problematic (especially given the alternatives), the article both describes a set of problems with automation that we must consider and provides an alarming demonstration of the injustices unfolding in implementation. But joining these two arguments should also make sense to those already convinced that machine-driven decisions are highly problematic. This is because it is quite clear that machines and automated processes are here to stay. It is therefore important both to identify their weaknesses and improve them (at times by integrating human discretion) and to ensure that the advantages they provide are shared equally throughout society.

Cite as: Tal Zarsky, Automatic – for the People?, JOTWELL (November 8, 2016) (reviewing Andrea Roth, Trial by Machine, 104 Georgetown Law Journal 1245 (2016)), https://cyber.jotwell.com/automatic-for-the-people/.

What is the Path to Freedom Online? It’s Complicated

Yochai Benkler, Degrees of Freedom, Dimensions of Power, Daedalus (2016).

In recent years, the internet has strengthened the ability of state and corporate actors to control the behavior of end users and developers. How can freedom be preserved in this new era? Yochai Benkler’s recent piece, Degrees of Freedom, Dimensions of Power, offers a sharp analysis of the processes that led to this development, along with guidelines for what can be done to preserve the democratic and creative promise of the internet.

For over two decades the internet was synonymous with freedom, promising a democratic alternative to dysfunctional governments and unjust markets. As a “disruptive technology,” it was believed to be capable of dismantling existing powers, displacing established hierarchies, and shifting power from governments and corporations to end users. These high hopes for participatory democracy and new economic structures have been largely displaced by concerns over the rise of online titans (Facebook, Google, Amazon), mass surveillance, and the misuse of power. The power to control distribution and access no longer resides at the end-nodes. Instead, it is increasingly held by a small number of state and corporate players. Governments and businesses harvest personal data from social media, search engines, and cloud services, and use it as a powerful tool to enhance their capacities. They also use social media to shape public discourse and govern online crowds. The most vivid illustration of this trend came during the recent coup attempt in Turkey, when President Recep Tayyip Erdoğan used social media to mobilize the people of Turkey to take to the streets and fight against the plotters.

How did we reach this point? Since the 1990s it has been evident that the internet may subvert power. In this article, Benkler explains how power may also shape the internet, and how it creates new points of control.

There are many ways to describe this shift of power. Some versions focus on changes in architecture and the rise of cloud computing and mobile internet. Others emphasize market pressure to optimize efficiency and consumer demands for easy-to-use downloading services.

Benkler draws a multidimensional picture of the forces that destabilized the first generation of the decentralized internet. These include control points offered by the technical architecture, such as proprietary portable devices (iPhone, Kindle), operating systems (iOS, Android), app stores, and mobile networks. The power shift was also driven by business models such as ad-supported platforms and big data, which enable market players to effectively predict and manipulate individual preferences. The rise of proprietary video streaming (Netflix) and of Digital Rights Management (DRM) as a prevailing distribution standard further threatens to marginalize open access to culture. What made the internet free, Benkler argues, is the integrated effect of these various dimensions, and it was changes in these dimensions “. . . that underwrite the transformation of the Internet into a more effective platform for the reconcentration of power.”

This multidimensional analysis enhances our understanding of power and demonstrates how it may restrain our freedom. Power, defined by Benkler as “the capacity of an entity to alter the behaviors, beliefs, outcomes or configurations of some other entity,” is neither good nor evil. Therefore, we should not simply seek to dismantle it, but rather to enable online users to resist it. Consequently, efforts to resist power and secure freedom should focus on interventions that disrupt forms of power as they emerge. This is an ongoing process in which “we must continuously diagnose control points as they emerge and devise mechanisms of recreating diversity of constraint and degrees of freedom in the network to work around these forms of reconcentrated power.”

Power is dynamic and manifests itself in many forms. Consequently, the complex system analyzed by Benkler does not offer instant solutions. There may be no simple path towards achieving freedom in the digital era, but plenty can be done to preserve the democratic and creative promise of the internet. Benkler offers several concrete proposals for interventions of this kind, such as facilitating user-owned, commons-based infrastructure that is controlled by neither the state nor the market; universal strong encryption controlled by users; regulation of app stores; and distributed mechanisms for auditing and accountability.

Exploring the different ways in which power is exercised in the online ecosystem may further inform our theory of change. Benkler calls our attention to both the virtues and the limits of decentralized governance. Decentralized design alone may not secure decentralized power, and may not guarantee freedom. Indeed, if we are concerned about preserving freedom, it is insufficient simply to yearn for decentralization. Yet decentralized design also reflects an ideology. The “original Internet” was not simply a technical system but also a system of values, one that assumed collective action should be exercised through rough consensus and running code. That is why decentralization may still matter for online freedom.

The “original Internet” provided hard evidence that loosely governed, distributed collective action could actually work, and that it could foster important emancipatory and creative progress. Indeed, the distributed design was instrumental to the flourishing of innovation and creativity, and to the widening of individuals’ political participation, over the past two decades. The fact that some of the forces that shape the internet have deserted this design does not undermine these core values.

Benkler warns that “the values of a genuinely open Internet that diffuses and decentralizes power are often underrepresented where the future of power is designed and implemented.” It does not follow, however, that the virtues of distributed systems should be abandoned. He calls on academics to fill this gap by focusing on the challenges to distributed design, diagnosing control points as they emerge, and devising tools and policies to secure affordances of freedom in the years to come.

Cite as: Niva Elkin-Koren, What is the Path to Freedom Online? It’s Complicated, JOTWELL (October 13, 2016) (reviewing Yochai Benkler, Degrees of Freedom, Dimensions of Power, Daedalus (2016)), https://cyber.jotwell.com/what-is-the-path-to-freedom-online-its-complicated/.