Software crashes all the time, and the law does little about it. But as Bryan H. Choi notes in Crashworthy Code, “anticipation has been building that the rules for cyber-physical liability will be different.” (P. 43.) It is one thing for your laptop to eat the latest version of your article, and another for your self-driving lawn mower to run over your foot. The former might not trigger losses of the kind tort law cares about, but the latter seems pretty indistinguishable from physical accidents of yore. Whatever one may think of CDA 230 now, the bargain struck in this country to protect innovation and expression on the internet is by no means the right one for addressing physical harms. Robots may be special, but so are people’s limbs.
In this article, Choi joins the fray of scholars debating what comes next for tort law in the age of embodied software: robots, the internet of things, and self-driving cars. Meticulously researched, legally sharp, and truly interdisciplinary, Crashworthy Code offers a thoughtful way out of the impasse tort law currently faces. While arguing that software is exceptional not in the harms that it causes but in the way that it crashes, Choi refuses to revert to the tropes of libertarianism or protectionism. We can have risk mitigation without killing off innovation, he argues. Tort, it turns out, has done this sort of thing before.
Choi dedicates Part I of the article to the Goldilocksean voices in the current debate. One camp, which Choi labels consumer protectionism, argues that with human drivers out of the loop, companies should pay the cost of accidents caused by autonomous software. Companies are the “least cost avoiders” and the “best risk spreaders.” This argument tends to result in calls for strict liability or no-fault insurance, neither of which Choi believes to be practicable.
Swinging from too hot to too cold, what Choi calls technology protectionism “starts from the opposite premise that it is cyber-physical manufacturers who need safeguarding.” (P. 58.) This camp argues that burdensome liability will prevent valuable innovation. This article is worth reading for the literature review here alone. Choi briskly summarizes numerous calls for immunity from liability, often paired with some version of administrative oversight.
Where Goldilocks found happiness in the third option, Choi’s third path forward is found wanting. What he calls doctrinal conventionalism takes the view that tort law as-is can handle things. Between negligence and strict products liability, this group argues, tort will figure robots out.
This third way, too, initially seems unsatisfactory. The law may be able to handle technological developments, Choi acknowledges, but the interesting question isn’t whether; it’s how. And crashing code, he argues in Part II, is at least somewhat exceptional. Its uniqueness isn’t in the usual lack of physical injuries, or the need for a safe harbor for innovation. It’s the problem of software complexity that makes ordinary tort frameworks ill-suited for governing code. Choi explains that software complexity makes it impossible for programmers to guarantee a crash-free program. This “very basic property of software…[thus] defies conventional judicial methods of assessing reasonableness.” (P. 79.) No matter how many resources one applied to quality assurance, one could “still emerge with errors so basic a jury would be appalled.” (P. 80.) (Another bonus for the curious: Choi’s discussion of top-down attempts to bypass these problems through mandating particular formal languages in high-stakes fields such as national security and aviation.)
The puzzle, then, isn’t that software now produces physical injuries, thus threatening the existing policy balance between protecting innovation and remediating harm. It’s that these newly physical injuries make visible a characteristic of software that makes it particularly hard to regulate ex post, through lawsuits. In other words, “[s]oftware liability is stuck on crash prevention,” when it should be focused instead on making programmers mitigate risk. (P. 87.)
In Part III, Choi turns to a line of cases in which courts found a way to get industry to increase its efforts at prevention and risk mitigation, without crushing innovation or otherwise shutting companies down. In a series of crashworthiness cases from the 1960s, courts found that car manufacturers were responsible for mitigating injuries in a car crash, even if (a) such crashes were statistically inevitable, and (b) the chain of causation was extremely hard to determine. While an automaker might not be responsible for the crash itself, it could be held liable for failing to make crashing safer.
Crashworthiness doctrine, Choi argues, should be extended from its limited use in the context of vehicular accidents to code. In the software context, he argues that “there are analogous opportunities for cyber-physical manufacturers to use safer designs that can mitigate the effects of a software error between the onset and the end of a code crash event.” (P. 101.) Programmers should be required not to prevent crashes entirely, but to use known methods of fault tolerance, which Choi discusses in detail. Courts applying crashworthiness doctrine to failed software thus would inspect the code to determine whether it used reasonable fault tolerance techniques.
Crashworthy Code could be three articles: one categorizing policy tropes; one identifying what makes software legally exceptional or disruptive; and one discussing tort law’s ability to handle risk mitigation. But what is most delightful about it is Choi’s thoroughness, his refusal to simplify or overreach. There is something truly delicious about finding a solution to a “new” problem in a strain of case law from the 1960s that most people probably don’t know existed. This is what common law is good at: analogies. And this is what the best technology lawyers among us are best at: finding those analogies, and explaining in detail why they fit technology facts.
Ian Kerr 1965–2019
Ian Kerr, who passed away far too young in 2019, was an incisive scholar and a much treasured colleague. The wit that sparkled in his papers was matched only by his warmth toward his friends, of whom there were many. He and his many co-authors wrote with deep insight and an equally deep humanity about copyright, artificial intelligence, privacy, torts, and much, much more.
Ian was also a valued contributor to the Jotwell Technology Law section. His reviews here display the same playful generosity that characterized everything else he did. In tribute to his memory, we are publishing a memorial symposium in his honor. This symposium consists of short reviews of a selection of Ian’s scholarship, written by a range of scholars who are grateful for his many contributions, both on and off the page.
Ellen P. Goodman & Julia Powles, Urbanism Under Google: Lessons from Sidewalk Toronto, __ Fordham L. Rev. __ (forthcoming 2019), available at SSRN
National Geographic’s April 2019 issue focused on ‘cities’, presenting photographs, highlighting challenges, and wondering about the future. Its editor highlighted that two-thirds of the world’s population is expected to live in a city by 2050, and recent history is replete with unfinished or abandoned blueprints for what this future might look like. Yet in the field of technology law and urban planning, the biggest story of the last two years may well be that of Toronto, where a proposal to rethink urban life through data, technology, and redevelopment has prompted important reflections on governance, privacy, and control.
In Urbanism Under Google: Lessons from Sidewalk Toronto, forthcoming in the Fordham Law Review, Ellen P. Goodman and Julia Powles set out to tell the story of the ‘Sidewalk Toronto’ project, from its early announcements (full of promise but lacking in detail) to the elaborate (yet no less controversial) legal and planning documents now publicly available. Goodman and Powles contribute to the public and academic scrutiny of this specific project, but their critique of process and transparency will obviously be of value in many other cities, especially as ‘smart city’ initiatives continue to proliferate.
Sidewalk Labs, associated with Google through its status as a subsidiary of Alphabet (Google’s post-restructuring parent company), is working with Waterfront Toronto (the tripartite agency consisting of federal, provincial and municipal government) to redevelop a soggy piece of waterfront land, ‘Quayside’. Or is it? One of Goodman and Powles’ main observations, splendidly delivered as an a-ha moment halfway through the piece, is how the relationship between, on one hand, the Quayside proposal and, on the other, the wider idea of redeveloping waterfront lands has come to public attention. And indeed, the complexity of the project and its associated documents must have been a key driver for Goodman and Powles, as much of the article’s contribution comes from its careful and close reading of the extensive documentation now published by Sidewalk and by Waterfront Toronto.
Sidewalk Toronto has prompted many reactions. Some are enthusiastic about the promise of a great big beautiful tomorrow. Others see a dystopian surveillance (city-)state where every move is not just tracked but monetised. In Goodman and Powles’ account, the focus is on two issues: (1) the corporate and democratic issues arising out of the relationship between Waterfront Toronto and Sidewalk Labs (including the role of the different levels of government and scrutiny operating in Canada) (part 2), and (2) the specific ‘platform’ vision set out by Sidewalk in its plans (part 3).
The corporate and democratic issues highlighted primarily relate to the role of Waterfront Toronto, which is itself a distinctive agency within the Canadian administrative state. The authors argue that insufficient detail about the project was provided at early stages, and that the processes for public engagement did not allow for sufficient participation by the public (especially as a result of information asymmetries). Various incidents from this period, including leaks, resignations, and media commentary, are set out.
Considering the ‘platform’ vision, a set of concerns trouble the authors. Drawing upon Sidewalk’s own textual and visual representations of its intentions, they identify efficiency and datafication as the key outcomes of a platform-led approach, and situate these developments in a wider ‘smart city’ literature. Testing Ben Green’s work on asymmetries of power, and applying broader concepts such as legal scholar Brett Frischmann’s account of infrastructure and geographer Rob Kitchin’s discussion of the embedding of values, they find that Sidewalk’s approach is radical (or at least an exaggerated version of what is already seen in some smart city initiatives) and potentially concentrates great power and influence.
One of the most fascinating concepts to emerge from Sidewalk’s plans, and the vibrant debate that these plans have provoked in Canada and elsewhere, is ‘urban data’. Sidewalk defines this broadly as data collected in public spaces and certain other spaces, normally de-identified. Goodman and Powles respond to this emergent (and non-statutory) category in both parts 2 and 3. They are not convinced by de-identification, self-certification, or the novelty of what Sidewalk proposes. Nor are they reassured by the emergent (and fashionable) idea of ‘data trusts’ as a way of addressing legal and popular concerns about the impact of initiatives like this on privacy and intellectual property, and questions of ownership and control associated with both issues.
On their return to the ‘urban data’ issue in the later pages, and bringing together the democratic and platform issues, the authors raise a set of broader questions about control over shared spaces (adding to Lilian Edwards’ influential exploration of public/private divides in smart cities). Neatly, then, Lawrence Lessig’s rhetorical and conceptual use of the history of road building and city planning as paving the way for an understanding of (information) architecture as law comes full circle, as Goodman and Powles wonder how control over a digital layer calls into question the norms of planning and land use.
The authors end on a somewhat resigned note, highlighting the infamy and ‘inevitability’ of the project. Today, though, there is still some doubt as to whether Sidewalk’s plans will come to fruition in their current form. As the Toronto-based scholar Natasha Tusikov, well known to scholars of technology law through her pioneering investigation of industry-led governance of Internet ‘chokepoints’, has just written, the June 2019 release of Sidewalk Toronto’s master plan has led to a new and much-watched round of consultations. Sidewalk’s next steps may therefore disclose the extent to which the critiques set out here by Goodman and Powles can be resolved, if at all, within this project—although their analysis can also inform the work that other cities and development agencies might be planning.
Cite as: Daithí Mac Síthigh, We the North (September 3, 2019) (reviewing Ellen P. Goodman & Julia Powles, Urbanism Under Google: Lessons from Sidewalk Toronto, __ Fordham L. Rev. __ (forthcoming 2019), available at SSRN), https://cyber.jotwell.com/we-the-north/
Ian Kerr 1965–2019
Our community, and the Jotwell Tech section, lost a giant this week. Ian Kerr, who died on August 27 of complications from cancer, had been a contributing editor to this section since its founding in 2009, and a luminary of the field of law and technology for much longer. As Canada Research Chair in Law, Ethics, and Technology at the University of Ottawa, Kerr was a leader in taking ethics and interdisciplinarity seriously. His expansive body of work often addressed questions years before others saw them.
Kerr embodied the Jotwell mission statement: to “identify, celebrate, and discuss the best.” He was unafraid to be laudatory; he was positive without apology. His writing could be bitingly funny, and he loved to let the air out of overinflated ideas, but he was unfailingly, irrepressibly generous toward his fellow scholars and his fellow humans. He won teaching award after teaching award. He welcomed, mentored, and was a constant source of encouragement and critical feedback for everyone—senior or junior, professor or student.
In Kerr’s words (about his alums and research team), we are “incredibly fortunate to find [ourselves] surrounded by such excellent people.” He himself was one of these excellent people, not least because he brought out what was excellent in everyone around him. We will miss his voice, his leadership, his brilliance, his friendship, his kindness and his irreverent grace.
Tarleton Gillespie’s important book Custodians of the Internet unpacks the simultaneous impossibility and necessity of content moderation, highlighting nuance rather than answering questions. Within big companies, content moderation is treated like custodial work, like sweeping the floors—and recent revelations reinforce that the abjectness of this work seems to contaminate those who do it. The rules are made by people in positions of relative power, while their enforcement is traumatic, poorly-paid, outsourced scutwork. But for major platforms, taking out the trash—making sure the site isn’t a cesspool—is in fact their central function.
Gillespie urges us to pay attention to the differences between a content policy—which is a document that both tries to shape the reactions of various stakeholders and is shaped by them—and actual content moderation; both are vitally important. (Facebook’s newly announced “Supreme Court” is on the former side: it will make important decisions, but make them at a level of generality that will leave much day-to-day work to be done by the custodial staff.) Every provision of a content policy represents a horror story and also something that will definitely be repeated. Gillespie is heartbreakingly clear on the banality of evil at scale: “a moderator looking at hundreds of pieces of Facebook content every hour needs more specific instructions on what exactly counts as ‘sexual violence,’ so these documents provide examples like ‘To snap a bitch’s neck, make sure to apply all your pressure to the middle of her throat’—which Facebook gives a green checkmark, meaning posts like that should stay.” (P. 112.)
Given the scale of decision-making, what else could be done? The Apple App Store preapproves apps, but constantly struggles because of the need for speed and the inevitable cries of censorship when Apple decides an app is too political. And Apple can get away with preapproval only because the flood of apps is orders of magnitude less than the flood of Instagram photos or similar content, and because app developers are relatively more likely to share ideas about reasonable expectations of a platform than are web users in general.
Most other platforms have relied on community flagging in the first instance. Relying on the users to report the need for clean-up is practically convenient, but also grants “legitimacy and cover” by signaling that the platform is listening to and trying to help users who are being harmed, while still leaving ultimate control in the platform’s hands. Mechanisms to report content can also be used by harassers, such as the person who reported hundreds of drag queens for violating Facebook’s real-name policy. YouTube sees flags surge when a new country gets access to the platform; Gillespie’s informant attributed this to users’ lack of knowledge of the YouTube community’s values. Gillespie sees in this reaction an “astounding assumption: a different logic would be to treat this new wave of flags as some kind of expression of values held by this new user population.” (P. 130.) Flagging will remain contested not just because values vary, but also because flagging relies on a devoted but small group of users to voluntarily police bad behavior, in conflict with another small group that deliberately seeks to cause mayhem.
There’s another approach, self-labeling, which the nonprofit Archive of Our Own (with which I volunteer) tries: ask users to evaluate their own content as they post it, and provide filters so users can avoid what they don’t want. This distributes the work more equitably. But tagging is time-consuming and can deter use, so commercial platforms make self-tagging limited, either relying on defaults or on rating entire users’ profiles, as on Tumblr. But self-tagging raises problems with consistency, since Tumblr users don’t always agree on what’s “safe,” not to mention what happens when Tumblr itself decides that “gay” is unsafe by definition. I’m obviously invested in the AO3; precisely because his analysis is so incisive, I wish Gillespie had spent a little time on what noncommercial platforms decide to do differently here and why.
Gillespie also has a great discussion of automated filtering. It’s relatively easy to compare images to hashes that screen out known child pornography images. But that relative ease is a product of law, technology, and policy that is hard to replicate for other issues. The database that allows screening is a high priority for platforms because of the blanket illegality of the content, and hashing known images is a far simpler task than identifying new images or their meaning in context. Microsoft developed the software, but recognized it as good PR to donate it for public use rather than keeping it as a trade secret or licensing it for profit, which isn’t true of other filtering algorithms like YouTube’s Content ID. Machine learning is trying to take on more complex tasks, but it’s just not very good yet. (After the book came out, we learned that Amazon’s machine learning tool for assessing resumes learned to discriminate against women—a machine learning algorithm aimed at harassment or hate speech will likewise replicate the biases of the arbiters who train the computer, and who tell it that “choke a bitch” is fine.) Also, we don’t really know what constitutes a “good” detection rate. Is 95% success identifying nude images good? What about a false positive rate of 1%? Are the false positives/negatives biased, the way facial recognition software tends to perform worse with darker-skinned faces?
Nor are the choices limited to removal; algorithmic demotion or screening allows people who know that the content exists to find it, while making it harder for others to stumble across it. But this makes platforms’ promise of sharing much more complicated: to whom are you visible, and when? These decisions can be hard for affected users to discover, much less understand, and they’re particularly important for marginalized groups. One good example is Tumblr’s shadow-banning of search terms like “porn” or “gay” on its app, with political consequences; similarly, it turns out that TripAdvisor reviewers may discover that they can’t use “feminism” or “misogyny” in their reviews (highlighting that algorithmic demotion always interacts with other policy choices). Meanwhile, YouTube and Twitter curate their trending pages to avoid sexual or otherwise undesired content, so it’s not really what’s trending but only an undeclared subset, which curation nonetheless never quite manages to avoid controversy or harm, as platforms rediscover every few months. Amazon does similar things with best-sellers to make sure that shapeshifter porn doesn’t get recommended to people who haven’t already expressed an interest in it. Users can manipulate this differential visibility, too, as we learned with targeted Facebook ads from Russians and others in the 2016 US presidential campaign.
Again, it might be worthwhile to consider the potential alternatives: the noncommercial AO3, like Wikipedia, has done very little to shape users’ searches, unlike YouTube’s radicalization machine.
In concluding, Gillespie judges content moderation to be so difficult that “all things considered, it’s amazing that it works at all, and as well as it does.” (P. 197.) Still, handing it over to private, for-profit companies, with very few accountability or transparency mechanisms, isn’t a great idea. At a minimum, he argues, platforms should have to be able to explain “why a post is there and how I should assess.” (P. 199.)
Gillespie suggests that Section 230 of the Communications Decency Act may warrant modification. He argues that it should still provide immunity for pure conduits and good faith moderation. “But the moment that a platform begins to select some content over others, based not on a judgment of relevance to a search query but in the spirit of enhancing the value of the experience and keeping users on the site,” (P. 43) it should be more accountable for the content of others. (I doubt this distinction would work at all.) Or, the safe harbor could be conditioned on having obligations such as meeting minimum standards for moderation, perhaps some degree of transparency or specific structures for due process/appeal of decisions. It is perhaps unsurprising that Facebook’s splashiest endeavor in this area, its “Supreme Court,” won’t provide these kinds of protections. It’s not in Facebook’s interest to limit its own flexibility in that way even if it is in Facebook’s interest to publicly perform adherence to certain general principles. This performance may well reflect sincere substantive commitments, but that also has advantages in fending off further regulation and in making bigness look good, because only big platforms like Facebook can sustain a “Supreme Court.” Although I have serious concerns about implementation of any procedural obligations (related to my belief that antitrust law would be a better source of regulation than blanket rules whose expense will ensure that no competitors to Facebook can arise), for purposes of this brief review it is probably more useful to note that the procedural turn is Gillespie’s own version of neutrality on the content of content policies. Authoritarian constitutionalism—where the sovereign adheres to principles that it announces but does not concede to its subjects the right to choose those principles—seems to be an easier ask than democratic constitutionalism for platforms.
Gillespie also suggests structural changes that wouldn’t directly change code or terms of service. He argues for greater diversity in the ranks of engineers, managers, and entrepreneurs, who currently tend to be from groups that are already winners and don’t see the downsides of their libertarian designs. This would be a kind of virtual representation that wouldn’t involve actual voting, whether for representatives or for policies, but would likely improve platform policymakers’ ability to notice certain kinds of harms and needs.
Innovation policy has focused on presenting and organizing information and content creation by users, not on innovation in governance and design or implementation of shared values; it could do otherwise. Gillespie’s most provocative question might be: What if we built platforms that tried to elicit civic preferences instead of consumer preferences? If only we knew how.
There once was a dream that was the Internet. But now the harsh morning light of Internet reality propels us to consider whether to get out of bed. Questions of content (un)trustworthiness seem omnipresent, and the liability protections of Section 230 of the Communications Decency Act are showing their age, in the opinion of their Congressional sponsors. Debates over “fake news,” “deep fakes,” “shallow fakes,” and hybrid warfare reveal diverging ethical defaults, even among similarly situated Internet companies. Anti-vaxxers’ and medical professionals’ opinions are mistakenly considered equivalent by a portion of Internet users, and algorithms make personalized content recommendations that sometimes perpetuate false and radical beliefs. Recent indictments remind us that jurists have never resolved the question of who counts as a “publisher” on the Internet (and what duties of care that role entails). Meanwhile, machine-learning practitioners, information security experts, and other technology professionals debate the construction of shared ethical codes and professional practices. Each of these conversations inevitably implicates questions of content intermediation in technology contexts, as well as the role of “expert” knowledge, professional licensing/credentialing, and professional liability.
Claudia Haupt in Licensing Knowledge asks us to consider “whether expert knowledge is still relevant in the information age.” Answering in the affirmative, Haupt’s article offers an injection of helpful intellectual rigor into discussions of knowledge construction, expertise, and the First Amendment. Haupt engages head-on the question of the Yelpification of expertise and knowledge (and its corresponding quality control challenges) as she takes us on a thought-provoking, interdisciplinary romp into the complex issues of “expert” speech and its intersection with personalization, professional licensing, and liability. As the article explains, “[s]cholars of the legal profession have asserted that ‘[t]he Internet has provided consumers with increasing access to information about the law and to information about the quality of services provided.’” (P. 522.) Yet, the ability to judge the quality of this information presents challenges particularly because of the rise of the lay “Internet expert.” These information asymmetries impact information accuracy and warrant consideration.
After introducing the tension between the First Amendment interests of speakers and the interest of states in preserving high-quality services in commercial contexts, the article argues that expert and professional speech is part of an effort to re-calibrate existing licensing regimes. Walking the reader through caselaw spanning a wide array of “experts”—from tour guides to doctors—Haupt highlights that not all professional licensing schemes are the same, and not all occupations pose equivalent threats to health and safety. In particular, Haupt distinguishes between information and knowledge: a specific type of information communicated as professional advice. Professional knowledge, argues Haupt, is rooted in expertise formed in the context of a knowledge community and conveyed for the benefit of a particular client. Haupt explains that by adopting a framework that distinguishes information and knowledge, it is possible to reconcile the interests of speakers and listeners, thereby reconciling licensing regimes with the First Amendment.
Building on insights from the science and technology studies literature regarding the epistemological foundations of expert knowledge, the article next turns to reconciling democratization of knowledge with the challenge of maintaining information accuracy. Haupt explains that some scholars suggest that the public can rise to the level of lay experts with “experiential knowledge of a condition.” (P. 533.) However, other scholars discard the idea of a lay expert as an oxymoron, suggesting that the primary question turns on the formalized extension of knowledge and a determination of whose knowledge counts as expertise. Haupt then connects this dominant view to the idea of expertise being formed in knowledge communities. She explains that the modern idea of scientific expertise arises from two historically distinct elements: “occupational expertise and the expertise claimed by scientists as privileged knowers of truth about the world.” (P. 534.)
Haupt highlights that one distinction between professions and other occupations is the fusion of theory and practice. A claim of authority arises from the existence of a shared methodology within the knowledge community of professionals. Haupt explains that “the link between expertise and authority extends to the professions in that professional experts monopolize the ability to speak the truth.” (P. 538.) A knowledge asymmetry therefore persists, but this undemocratic reality, perhaps counterintuitively, presents the potential to advance public discourse. While the First Amendment frames expert knowledge as opinion equal to other opinions, licensing regimes by design create speaker inequality by acknowledging the role of listener interests.
Haupt explains that in the situation where both a professional licensing regime and a remedy in tort for malpractice exist, professional speech protection and licensing are actually complementary. The category of “professional speech,” argues Haupt, is thus fundamentally different and presents a “unique category of speech.” (P. 552.) It “reflects the shared knowledge of professionals” in the “knowledge community that is communicated from professional to client within the confines of a professional-client relationship.” She tells us that “bad professional advice is properly suppressed.” (P. 555.)
In a legal context, Haupt’s arguments might bring to mind the licensing/liability of broker dealers and investment advisors—a structure where compelled disclosure and regulatory oversight of personalized advice work in tandem with the dynamics of licensing, communities of practice, duties of care, and malpractice liability, without significant First Amendment concerns. Indeed, in the halcyon days of the late 1990s, the issues of personalization of investment advice, democratized access to markets, and market trust animated the SEC’s analysis of the permissibility of Internet brokerages such as E*TRADE. Thus, Haupt’s discussion offers insights that implicate a host of Internet-related trustworthiness and intermediation issues, both past and future.
The framing of Haupt’s argument around knowledge communities might also remind the reader of Michael Polanyi’s related insights on “tacit knowledge” and “public mental heritage” held by professionals in “normative dynamic orders.” Polanyi explains that “[i]n each field” generations pass on “a public mental heritage” comprised of both substantive and “tacit knowledge,” the unspoken but shared culture of “knowing how” one obtains only from being inside the community. In other words, relevant dynamic orders of experts, through consultation, competition, or a combination of the two, introduce new participants into their community of expertise. “Then, when they suggest their own additions or reforms, they return to the public and claim publicly that these be accepted by society–to become in their turn a part of the common heritage.” To wit, we might also view Haupt’s Licensing Knowledge itself as a noteworthy contribution to the public mental heritage of the legal profession.
Daniel Susser, Beate Roessler & Helen Nissenbaum, Online Manipulation: Hidden Influences in a Digital World, available at SSRN
Congress has been scrambling to address the public’s widespread and growing unease about problems of privacy and power on information platforms, racing to act before the California Consumer Privacy Act becomes operative in 2020. Although the moment seems to demand practical and concrete solutions, legislators and advocates should pay close attention to a very timely and useful work by a set of philosophers. In a working paper entitled Online Manipulation: Hidden Influences in a Digital World, three philosophers–Daniel Susser, Beate Roessler, and Helen Nissenbaum–offer a rich and nuanced meditation on the nature of “manipulation” online. This article might provide the conceptual clarity required for the broad and sweeping kind of new law we need to fix much of what ails us. Although the article is theoretical, it could lead to some practical payoffs.
The article’s most important contribution is the deep dive it provides into the meaning of manipulation, a harm separate and distinct from other harms more often featured in today’s technology policy discourse. Powerful players routinely deprive us of an opportunity for self-authorship over our own actions. Advertisers manipulate us into buying what we don’t need; platforms manipulate us into being “engaged” when we would rather be “enlightened” or “provoked” or “offline”; and political operatives manipulate us into voting against our interests. Taken together, these incursions into individual autonomy feed societal control, power imbalances, and political turmoil. The article builds on the work of many others, including Tal Zarsky, Ryan Calo (in an article that has received well-deserved praise from Zarsky in these pages), and Frank Pasquale, who have all written about the special problems of manipulation online.
The heart of the paper is an extended exploration into what it means to manipulate and how it differs from other forms of influence both neutral (persuasion) and malign (coercion). The philosophers focus on the hidden nature of manipulation. If I bribe or threaten or present new evidence to influence your decision-making, you cannot characterize what I am doing as manipulation, according to their definition, because my moves are visible in plain sight. I might be able to force you to take the decision I desire, which might amount to problematic coercion, but I have not manipulated you.
This insistence on hidden action might not square with our linguistic intuitions. We indeed might feel manipulated by someone acting in plain sight, and the authors are not trying to argue against these intuitions. Instead, they claim that by limiting our definition of manipulation to hidden action, we can clear up conceptual murkiness on the periphery of how we define and discuss different forms of discreditable influence. This is very useful ground clearing, helping manipulation stand on its own as a category of influence we might try to attack through regulation or technological redesign.
The piece convincingly links increased fears of manipulation, thus defined, to the current and likely future state of technology and the power of information platforms in particular. The pervasive surveillance of today’s information technology gives would-be manipulators access to a rich trove of data about each of us (Dan Solove’s digital dossiers, Danielle Citron’s reservoirs of danger), which they can buy and use to personalize their manipulations. Knowing the secret manipulation formula for each individual, they can then use the “dynamic, interactive, intrusive, and personalized choice architectures” of platforms to give rise to what Karen Yeung calls “hypernudging.” Online tools hide such behavior by design, receding into the background; in one of the more evocative analogies in the paper, the authors argue that information technology operates more like eyeglasses than magnifying glasses, because we forget about them when we are using them. “A determined manipulator could not dream up a better infrastructure through which to carry out his plans” than today’s technological ecosystem, they conclude.
Having crafted their own definition of manipulation, and after connecting it to modern technology, the authors turn last to theories of harm. They focus on harm to autonomy, on the way manipulation undermines the ability of the manipulated “to act for reasons of their own.” We are treated like puppets by puppet masters pulling our strings; “we feel played.”
The cumulative effects of individual manipulations harm society writ large, posing “threats to collective self-government.” Consider the bolder claims of psychographic targeting made by the people at Cambridge Analytica before the last election, which if true suggest that “democracy itself is called into question” by online manipulation.
If Congress wants to enact a law prohibiting manipulative practices, this article offers some useful definitions: a manipulative practice is “a strategy that a reasonable person should expect to result in manipulation,” and manipulation is defined as “the covert subversion of an individual’s decision making.” Congress would be wise to enact this kind of law, perhaps adding it as a third prohibited act alongside deception and unfairness in section five of the FTC Act.
In addition, Congress could breathe new life into notice-and-choice regimes. Currently, we are asked to take for granted that users “consent” to the extensive collection, use, and sharing of information about them because they clicked “I agree” to a terms-of-service pop-up window they once saw back in the mists of time. Were we to scrutinize the design of these pop-ups, assessing whether online services have used manipulative practices to coax users to “agree,” we might recognize the fiction of consent for what it really is. We should implicitly read or explicitly build into every privacy law’s consent defense a “no dark patterns” proviso, to use the phrase scholars like Woody Hartzog have applied to manipulative consent interfaces.
Finally, although these authors ground their work in the concept of autonomy, an unmeasurable concept not well-loved by economists, their argument could resonate in the god-forsaken, economics-drenched tech policy landscape we are cursed to inhabit. Manipulation, as they have defined it, exacerbates information asymmetry, interfering with an individual’s capacity to act according to preferences, resulting in market failure. A behavioral advertiser with a digital dossier “interferes with an agent’s decision-making process as they deliberate over what to buy. Worse yet, they may be enticed to buy even when such deliberation would weigh against buying anything at all.”
In fact, the authors go to great lengths to explore how harmful manipulation interacts with the concept of nudges. Some nudges should count as manipulation, when their designs and mechanisms are hidden, even if they bring about positive behavioral change. The architects of the theory of nudges might even embrace this conclusion. The article quotes liberally from Cass Sunstein, who has explored the ethics of government-imposed nudges, acknowledging their sometimes manipulative quality. The article resonates with recent ruminations by Richard Thaler, who has coined a new term, “sludges,” the negative mirror image of positive nudges. These fathers of nudges are finally cottoning on to what privacy scholars have been writing about for years: at least online, the negative sludges we encounter seem to outnumber the positive nudges, with the gap widening every day.
We have a new target in our sights, whether we call them manipulative practices, dark patterns, or sludges: the technological tools and tricks that powerful information players use to treat us like their puppets and cause us to act against our own self-interest. By lending precision to the meaning of manipulation, this article can help us meet the challenge of many of the seemingly impossible problems before us.
Kiel Brennan-Marquez & Stephen E. Henderson, Artificial Intelligence and Role-Reversible Judgment, __ J. of Crim. L. and Criminology __ (forthcoming), available at SSRN
Are some types of robotic judging so troubling that they simply should not occur? In Artificial Intelligence and Role-Reversible Judgment, Kiel Brennan-Marquez and Stephen E. Henderson say yes, confronting an increasingly urgent question. They illuminate dangers inherent in the automation of judgment, rooting their analysis in a deep understanding of classic jurisprudence on the rule of law.
Automation and standardization via software and data have become a regulative ideal for many legal scholars. The more bias and arbitrariness emerge in legal systems, the more their would-be perfecters seek the pristine clarity of rules so clear and detailed that they can specify the circumstances of their own application. The end-point here would be a robotic judge, pre-programmed (and updated via machine learning) to apply the law to any situation that may emerge, calculating optimal penalties and awards via some all-commensurating logic of maximized social welfare.
Too many “algorithmic accountability” reformers, meanwhile, are either unaware of this grand vision of a legal singularity, or acquiescent in it. They want to use better data to inform legal automation, and to audit it for bias. The more foundational question is asked less often: does the robo-judge present not simply problems of faulty algorithms and biased or inaccurate data, but something more fundamental—a challenge to human dignity?
Brennan-Marquez and Henderson argue that “in a liberal democracy, there must be an aspect of ‘role-reversibility’ to judgment. Those who exercise judgment should be vulnerable, reciprocally, to its processes and effects.” The problem with an avatar judge, or even some super-sophisticated robot, is that it cannot experience punishment the way that a human being would. Role-reversibility is necessary for “decision-makers to take the process seriously, respecting the gravity of decision-making from the perspective of affected parties.”
Brennan-Marquez and Henderson derive this principle from basic principles of self-governance:
In a democracy, citizens do not stand outside the process of judgment, as if responding, in awe or trepidation, to the proclamations of an oracle. Rather, we are collectively responsible for judgment. Thus, the party charged with exercising judgment—who could, after all, have been any of us—ought to be able to say:
This decision reflects constraints that we have decided to impose on ourselves, and in this case, it just so happens that another person, rather than I, must answer to them. And the judged party—who could likewise have been any of us—ought to be able to say: This decision-making process is one that we exercise ourselves, and in this case, it just so happens that another person, rather than I, is executing it.
Thus, for Brennan-Marquez and Henderson, “even assuming role-reversibility will not improve the accuracy of decision-making, it still has intrinsic value.”
Brennan-Marquez and Henderson are building on a long tradition of scholarship which focuses on the intrinsic value of legal and deliberative processes, rather than their instrumental value. For example, the U.S. Supreme Court’s famous Mathews v. Eldridge calculus has frequently failed to take into account the effects of abbreviated procedures on claimants’ dignity. Bureaucracies, including the judiciary, have enormous power. They owe litigants a chance to plead their case to someone who can understand and experience, on a visceral level, the boredom and violence portended by a prison stay, the brutal need resulting from the loss of benefits, the sense of shame that liability for drunk driving or pollution can give rise to. And as the classic Morgan v. United States held, even in complex administrative processes, the one who hears must be the one who decides. It is not adequate for persons to play mere functionary roles in an automated judiciary, gathering data for more authoritative machines. Rather, humans must take responsibility for critical decisions made by the legal system.
This argument is consistent with other important research on the dangers of giving robots legal powers and responsibilities. For example, Joanna Bryson, Mihailis Diamantis, and Thomas D. Grant have warned that granting robots legal personality raises the disturbing possibility of corporations deploying “robots as liability shields.” A “responsible robot” may deflect blame or liability from the business that set it into the world. It cannot truly be punished, because it lacks human sensations of regret or dismay at loss of liberty or assets. It may be programmed to look as if it is remorseful upon being hauled into jail, or to frown when any assets under its control are seized. But these are simulations of human emotion, not the thing itself. Emotional response is one of many fundamental aspects of human experience that is embodied.
Brennan-Marquez and Henderson are particularly insightful on how the application of law needs to be pervasively democratic in order to be legitimate. That is, of course, most obvious in the jury, but the authors’ account refines our common understanding of the practice. To understand “why the jury has long been celebrated as an organ of ‘folk wisdom,’” Brennan-Marquez and Henderson argue:
The idea is not that jurors have a better sense of right and wrong than institutional actors do. (Though that may also be true.) It is, more fundamentally, that jurors respond to the act of judgment as humans, not as officials, and in this respect, jury trials are a model of what role-reversibility makes possible: even when a jury trial does not lead to a different outcome than a trial before an institutional judge (or other fact-finding process), it facilitates the systemic recognition of judgment’s human toll. And even more fundamentally, it transforms the trial into a democratic act.
The common humanity of the judge (or agency director, or commissioner) and litigants is another reflection of the democratic nature of the polity that gives rise to a legal system.
It should come as little surprise that authoritarian legal systems are among the most enthusiastic for automatic, computational judgments of guilt or “trustworthiness.” Their concepts of “rule by law” place authorities above the citizenry they judge. By contrast, rule of law values, rooted in a democratic polity, require that any person dispensing justice is also eligible to be subject to the laws he or she applies.
Artificial Intelligence and Role-Reversible Judgment is a far-seeing project—one that aims to change the agenda of AI research in law, rather than merely improving its applications. Brennan-Marquez and Henderson carefully review the many objections scholars have raised to the data gathered for legal AI, and the theoretical objections to the vision of “artificial general intelligence” that seems necessary for computational legal systems to emerge. “We do not minimize any of these instrumental arguments in favor of human judgment,” they argue. “They are certainly valid today, and they may survive the next generation of AI. [But this article explores] what should happen if arguments like these do not survive.” The requirement for a human to evaluate arguments and dispense judgments in a legitimate legal system should give pause to those who are now trying to develop artificially intelligent judges. Why pursue the research program if it violates the role reversibility principle, which Brennan-Marquez and Henderson rightly characterize as a basic principle of democratic accountability?
Brennan-Marquez and Henderson’s work is a great example of how a keen phenomenology of the uncanniness and discomfort accompanying a vertiginously technified environment can deepen and extend our understanding of key normative principles. Judged by an avatar, one might wonder: “Who programmed it? What were they paid? Did they understand the laws they were coding? What could I have done differently?” The emerging European right to an explanation is meant to give persons some answers to such queries. But Brennan-Marquez and Henderson suggest that mere propositional knowledge is not enough. The “right to a human in the loop” in legal proceedings gains new moral weight in light of their work. It should be consulted by anyone trying to advance legal automation, and those affected by it.
Cite as: Frank Pasquale, Empathy, Democracy, and the Rule of Law (May 8, 2019) (reviewing Kiel Brennan-Marquez & Stephen E. Henderson, Artificial Intelligence and Role-Reversible Judgment, __ J. of Crim. L. and Criminology __ (forthcoming), available at SSRN), https://cyber.jotwell.com/empathy-democracy-and-the-rule-of-law/
Any Internet regulation—from privacy to copyright to hate speech to network neutrality—must take account of the complex and messy dynamics of meme-fueled conflicts. And for that, An Xiao Mina’s Memes to Movements is an essential guide.
Mina is not a traditional academic. She is a technologist, artist, and critic; her day job is Director of Products at Meedan, which builds tools for global journalism. But Memes to Movements draws fluently on cutting-edge work by scholars like Alice Marwick and Rebecca Lewis, Whitney Phillips, and Sasha Costanza-Chock, among many others. It is an outstanding synthesis, beautifully and clearly written, that gives an insightful overview of media and politics circa 2019.
Mina’s overarching point is that Internet memes—rather than being a frivolous distraction from serious political discourse—have become a central element of how effective social movements advance their political agendas. Their unique combination of virality and adaptability gives them immense social and communicative power. Think of a rainbow-flagged profile picture celebrating the Supreme Court’s same-sex marriage decision in 2015. The rainbow flag is universal; it makes the message of support immediately recognizable. But the picture is specific; it lets the user say, “I, me, personally support the right to marry.”
Memes do many kinds of work for movements. Memes allow participants to express belonging and solidarity in highly personalized ways, as with the rainbows. They let activists in repressive environments skirt the edges of censorship with playful wordplay. They enable activists to cycle rapidly through “prototypes” until they find ones with a compelling mass message. (There is an explicit parallel here to the technology industry’s use of rapid development practices; see also Mina’s recent essay on Shenzhen.) They help movements craft powerful narratives around a single immediately recognizable and easily graspable idea. One of the best extended examples in the book traces the gradual breakout of the #BlackLivesMatter hashtag from a surging sea of related memes: it had a popular poetic power that became widely apparent only as it started to catch on. And finally, the cycle closes: memes also let counter-movements use parodies and remixes to turn the ideas around for their own ends.
One of Mina’s most striking observations is the increasing importance of physical objects as memes, like mass-produced red MAGA caps and individually knitted pink pussy hats. Mina ties their rise both to globalized production and logistics networks and to individual craft work. The embeddedness of physical memes creates a powerful specificity, which in turn can fuel the spread of online ideas. Mina’s examples include the yellow umbrellas held by pro-democracy protesters in Hong Kong and the Skittles dropped by protesters calling attention to the death of Trayvon Martin.
As this last pair illustrates, Memes to Movements is a thoroughly global book. Mina discusses protest movements in the very different political environments of China and the United States with equal insight, and draws revealing parallels and contrasts between the two. The book is particularly sharp on how Chinese authorities sometimes defuse politically potent memes like Grass Mud Horse by allowing the natural forces of memetic drift to dilute them to the point that they no longer uniquely refer to prohibited ideas.
This is also a book that is deeply, depressingly realistic about the uses of power. Activists have no monopoly on memes; state actors deploy them for their own purposes. Government-sponsored memes can take the form of an anti-Hillary image macro or a patriotic pop song that seemingly comes out of nowhere. Indeed, these forms of propaganda are finely tuned to the Internet, just as Triumph of the Will was finely tuned to mass media. Marketers, too, pay close attention to the dynamics of virality, and Mina traces some of the cross-pollination among these different groups competing to use memetic tools most effectively. Kim Kardashian’s skill in promoting criminal justice reform is not so different in kind from her skill as a commercial influencer: she knows how to make a simple idea take off.
Above all, this is a compelling book on how attention functions in the world today, for better and for worse. It is a field guide to how groups and individuals—from Ayotzinapa 43 to Donald J. Trump—capture attention and direct it toward their preferred aims. Mina was writing perceptively about how Alexandria Ocasio-Cortez was winning Instagram long before it was cool.
What do Internet-law scholars have to learn from a book with very little discussion of Internet law? Just as much as family-law scholars have to learn from books about family dynamics, or intellectual-property scholars have to learn from books about creativity—Memes to Movements is an extraordinary guide to a social phenomenon the legal system must contend with. It describes democratic culture in action: it illustrates the idea-making on which law-making depends; it connects the micro scale of the creation and distribution of individual bits of content to the macro scale of how they shape politics and society. Plus it features elegant prose and charming pixel art by Jason Li. Fifty million cat GIFs can’t be wrong.
As more and more of our daily activities and private lives shift to the digital realm, maintaining digital security has become a vital task. Private and public entities find themselves controlling vast amounts of personal information, and are therefore responsible for assuring that such information does not find its way into unauthorized hands. In some cases, there are strong incentives to maintain high standards of digital security, because security breaches are costly affairs. When reports of such breaches are made public, they generate reputational costs, invite regulatory scrutiny, and often require substantial out-of-pocket expenses to fix. Unfortunately, however, these internal incentives are often insufficient motivators. In such cases, the security measures taken are inadequate, outdated, and generally unacceptable. These are the instances where legal intervention is required.
There are several possible regulatory strategies to try to improve digital security standards. One option calls for greater transparency regarding breaches that led to personal data leakage and other negative outcomes. Another option calls upon the government to set data security standards and enforce them, at least in key sectors (more on these two options and their limitations, below). A third central form of legal intervention is private litigation through the court system. However, key doctrinal hurdles in the United States currently make it extremely difficult to sue for damages resulting from security breaches. In an important recent paper, Daniel Solove and Danielle Citron, two prominent privacy scholars, explain what these hurdles are, how to overcome them, and why such doctrinal changes are essential.
As the authors explain, the key to many of the challenges of data security litigation is the concept of “harm,” or rather its absence. A finding of actual, tangible harm is crucial for establishing standing, which requires demonstrating an injury that is both concrete and actual (or at least imminent). Without standing, the case is thrown out immediately, without further consideration. Additionally, tort-based claims (as opposed to some property-based claims) require a showing of harm. And when examining data security claims, courts require tangible damages to prove harm. Because security-related harms are often considered intangible, many data security lawsuits are either immediately blocked or ultimately fail.
The complex issue of harm, standing and data security/privacy has been recently addressed by the U.S. Supreme Court in Clapper v. Amnesty International USA (where the Court generally rejected “hypothetical” injuries as sufficient to establish standing) and more recently in Spokeo Inc. v. Robins. In this latter case (addressing standing and the FCRA) the Court has, at least in principle, recognized that intangible harms could be considered sufficiently “concrete” if they generate the risk of real harm, and thus provide plaintiffs with standing. Furthermore, an additional case—Frank v. Gaos—is currently before the Supreme Court. While this latter case focuses on the practice of cy pres settlements in class actions, it appears, yet again, to incidentally raise questions related to standing, harms and digital security/privacy—this time with regard to referrer headers.
In response to the challenges security litigation faces, the authors call upon courts to enter the 21st century and accept changes to the doctrines governing the establishment of harm. They convincingly show that security breaches indeed create both harm and anxiety—but of a somewhat different form than courts have traditionally recognized. In fact, they assert, some courts have already begun to recognize harms resulting from data security breaches. For instance, courts have found that a “mere” increased risk of identity theft constitutes actual harm (even before such theft has occurred) when the data has made its way into the hands of cyber-criminals. The authors prod courts to push further in their expansion of the harm concept in the digital age. They note three major forms of injury which should be recognized in this context: (1) the risk of future injury, (2) the fact that individuals at risk must take costly (in time and money) preventive measures to protect against future injury, and (3) enhanced anxiety.
To make this innovative argument, the authors explain that data security breaches create unique concerns which justify the expansion of the concept of harm. For instance, they explain that damages (which might prove substantial) resulting from data breaches could be delayed. Therefore, recognizing harm at an earlier stage is essential. In addition, they argue that the risk of security harms might deter individuals from engaging in important and efficiency-enhancing activities such as seeking new employment opportunities and purchasing a new home. This is yet another strong argument for immediately creating a cause of action through the recognition of harm.
Judges are usually cautious about creating new rules, especially in common law systems. Yet the authors explain that in other legal contexts, such as medical malpractice, similar forms of intangible harms have already been recognized. They refer to cases based on actions that increased the chance of illness or decreased the chance of recovery. These have been recognized as actual harms—instances somewhat analogous to personal data leakage and the harms that might follow.
Yet broadening the notion of data “harm” has some downsides, such as inviting plaintiffs to “cheat” and manipulate the system. This is because intangible harms are easier to fake or fabricate, and because the definition of intangible harm might be too open-ended. In addition, broadening the notion of harm might confuse the courts. To mitigate some of these concerns, the authors introduce several criteria to assist courts in establishing and assessing harm in this unique context. These include the likelihood and magnitude of future injury, as well as the mitigating and preventive measures those holding the data have taken.
Finally, the authors confront some broader policy questions pertaining to their innovative recommendations. Litigation, of course, is not the only way to try to overcome the problems of insecure digital systems. It probably isn’t even the best way to do so. I have argued elsewhere that courts are often an inadequate venue for promoting cybersecurity objectives. Litigation is costly to all parties. It also might stifle innovation and end up merely enriching the parties’ lawyers. In addition, judges usually lack the proper expertise to decide these issues. Furthermore, in this context, ex post court rulings are an insufficient motivator to ensure that proper security measures will be set in place ex ante, given the issue’s complexity and the difficulties of proving causation (i.e., the linkage between the firm’s actions or omissions and the damages that follow at a later time).
The authors would probably agree with these assertions and indeed acknowledge most of them in their discussion. Nonetheless, they argue that other regulatory alternatives such as breach notification requirements and regulatory enforcement suffer from flaws as well. This is, no doubt, true. Breach notifications might generate insufficient incentives for data collectors to minimize future breaches, as users might be unable or unwilling to voice or act on their disappointment with the flawed security measures adopted. And data security regulatory enforcement might suffer from the usual shortcomings of governmental enforcement—it being too minimal, not up to date and at times subject to capture. Litigation, the authors argue, could fill a crucial void when other options fail. They state that “data-breach harms should not be singled out” as problematic relative to other kinds of legal harms. Therefore, courts should have the option to find that harm has been caused and thus additional legal actions must be taken when they have good reasons to do so.
Using doctrinal barriers (such as refraining from acknowledging new forms of harm) to block off specific legal remedies is an indirect and somewhat awkward strategy. Yet it is also an acceptable measure to achieve overall policy goals. The authors convincingly argue that (all) judges should have the power to decide a case on its merits, yet in doing so they inject uncertainty into the already risky business of data security. If Solove and Citron’s proposals are ultimately adopted, let us hope that judges use this power responsibly, looking beyond the hardship of those victimized by data breaches to consider the overall interests of the digital ecosystem before delivering judgment in digital security cases.