Kate Crawford & Tarleton Gillespie, What is a flag for? Social media reporting tools and the vocabulary of complaint, New Media & Society (2014), available at SSRN.

The problem of handling harassing and discriminatory online speech, as well as other forms of unpleasant and unlawful content—infringing, privacy-invading, or otherwise tortious—has been a matter for public discussion pretty much since people noticed that there were non-governmental intermediaries involved in the process. From revenge porn to videos of terrorist executions to men kissing each other to women’s pubic hair, controversies routinely erupt over whether intermediaries are suppressing too much speech, or not enough.

“Flagging” offensive content is now an option offered to users across many popular online platforms, from Facebook to Tumblr to Pinterest to FanFiction.net. Flagging allows sites to outsource the job of policing offensive content (however defined) to unpaid—indeed, monetized—users, as well as to offer a rhetoric to answer charges of censorship against those sites: the fact that content was reported makes the flagging user(s) responsible for a deletion, not the platform that created the flagging mechanism. But the meaning of flags, Crawford and Gillespie persuasively argue, is “anything but straightforward.” Users can deploy flags strategically, as can other actors in the system who claim to be following community standards.

One of the most significant, but least visible, features of a flagging system is its bluntness. A flag is binary: users can only report one level of “badness” of what they flag, even if they are allowed several different subcategories to identify their reasons for flagging. Nor are users part of the process that results, which is generally opaque. (As they note, Facebook has the most clarity on its process, likely not because of its commitment to user democracy but because it has faced such negative PR over its policies in the past.)
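
To make that bluntness concrete, here is a minimal, purely illustrative sketch of what a flag report typically captures; every name in it is hypothetical and stands in for no particular platform’s actual interface or API.

```python
# Purely illustrative: hypothetical names, not any real platform's API.
from dataclasses import dataclass
from enum import Enum


class FlagReason(Enum):
    """The fixed submenu of reasons: the whole vocabulary offered to the reporter."""
    SPAM = "spam"
    HARASSMENT = "harassment"
    NUDITY = "nudity"
    HATE_SPEECH = "hate speech"
    OTHER = "other"


@dataclass(frozen=True)
class FlagReport:
    content_id: str
    reporter_id: str
    reason: FlagReason
    # Conspicuously absent: any field for degree of concern, for "troubling but
    # worth preserving," for disputing the rule itself, or for a rationale that
    # would persist as a trace for later community debate.


def record_flag(report: FlagReport, flag_counts: dict[str, int]) -> None:
    """The report collapses into a counter; whatever review follows is invisible to the reporter."""
    flag_counts[report.content_id] = flag_counts.get(report.content_id, 0) + 1
```

However elaborate the submenu becomes, the structure stays the same: a single, undifferentiated signal attached to a piece of content.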

Another, related feature is flagging’s imperviousness to precedent—the memory-traces that let communities engage in ongoing debates about norms, boundaries, and difficult marginal judgments. Crawford and Gillespie explain:

[F]lags speak only in a narrow vocabulary of complaint. A flag, at its most basic, indicates an objection. User opinions about the content are reduced to a set of imprecise proxies: flags, likes or dislikes, and views. Regardless of the proliferating submenus of vocabulary, there remains little room for expressing the degree of concern, or situating the complaint, or taking issue with the rules. There is not, for example, a flag to indicate that something is troubling, but nonetheless worth preserving. The vocabulary of complaint does not extend to protecting forms of speech that may be threatening, but are deemed necessary from a civic perspective. Neither do complaints account for the many complex reasons why people might choose to flag content, but for reasons other than simply being offended. Flags do not allow a community to discuss that concern, nor is there any trace left for future debates. (P. 7.)

We often speak of the internet as a boon for communities, but it is so only in certain ways, and website owners can structure their sites so that certain kinds of communities have a harder time forming or discussing particular issues. Relatedly, YouTube’s Content ID, now a major source of licensing revenue for music companies, allows those companies to take down videos to which they object regardless of the user’s counternotifications and fair use claims, because Google’s agreements with the music companies go beyond the requirements of the DMCA. No reasoned argument need be made, as it would be in a court of law, and so neither the decisionmakers nor the users subject to YouTube’s regime get to think through the limiting principles—if any—applied by the algorithms and/or their human overlords. I have similar concerns with Amazon’s Kindle Worlds (and the Kindle’s ability to erase or alter works that Amazon deems erroneously distributed, leaving no further trace) compared to the organic, messy world of noncommercial fan fiction.

This is a rich paper with much to say about the ways that, for example, Flickr’s default reporting of images as “inappropriately classified” rather than completely unacceptable structures users’ relation to the site and to each other. “Whether a user shoehorns their complex feelings into the provided categories in a pull-down menu in order to be heard, or a group decides to coordinate their ‘complaints’ to game the system for political ends, users are learning to render themselves and their values legible within the vocabulary of flags.” Crawford and Gillespie’s useful discussion also offers insights into other forms of online governance, such as the debates over Twitter’s reporting system and the merits of “blocking” users. A “blocking” feature, available for example on Tumblr and Twitter, enables a logged-in user to avoid seeing posts from any blocked user; the offensive user disappears from the site, but only from the blocker’s perspective. Like denizens of China Miéville’s Besźel and Ul Qoma, they occupy the same “space” but do not see each other. This literalization of “just ignore the trolls” has its merits, but it also allows the sites to disclaim responsibility for removing content that remains visible to, and findable by, third parties. We may be able to remake our view of the world to screen out unpleasantness, but the unpleasantness persists—and replace “unpleasantness” with “slander and threats” and this solution seems more like offering victims blinders rather than protecting them.
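
The mechanics are simple enough to sketch. Assuming a hypothetical timeline function (none of these names come from Tumblr’s or Twitter’s actual implementations), blocking is just a per-viewer filter applied at display time:

```python
# Illustrative sketch of viewer-side "blocking"; all names are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Post:
    post_id: str
    author_id: str
    text: str


@dataclass
class Viewer:
    user_id: str
    blocked: set[str] = field(default_factory=set)


def timeline_for(viewer: Viewer, all_posts: list[Post]) -> list[Post]:
    # Filtering happens at display time, per viewer. Nothing is removed from
    # the underlying store, so blocked content stays visible to, and findable
    # by, every other user.
    return [p for p in all_posts if p.author_id not in viewer.blocked]


posts = [Post("1", "troll", "a threatening post"), Post("2", "friend", "hello")]
victim = Viewer("victim", blocked={"troll"})
bystander = Viewer("bystander")

assert len(timeline_for(victim, posts)) == 1     # hidden from the blocker
assert len(timeline_for(bystander, posts)) == 2  # still there for everyone else
```

The post survives untouched in the underlying store; only the blocker’s view of the site changes.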

What about total openness instead? As Crawford and Gillespie point out, Wikipedia generally retains a full history of edits and removals, but that process can also become exclusionary and opaque in other ways. Nonetheless, they suggest that an “open backstage” might offer a good way forward, in that it could “legitimize and strengthen a site’s decision to remove content. Significantly, it would offer a space for people to articulate their concerns, which works against both algorithmic and human gaming of the system to have content removed.” Moreover, an “open backstage” would emphasize the ways in which platforms are social systems where users can and should play a role in shaping norms.

I’m not as sanguine about this prospect. As Erving Goffman explained so well, even “backstage” is in fact a performance space when other people are watching, so I would expect new and different forms of manipulation (as has happened on Wikipedia) rather than a solution to opacity. Proceduralization and the ability to keep arguing endlessly can be a recipe for creating indifference among all but a tiny, unrepresentative fraction of users, which arguably is what happened with Wikipedia. It’s a new version of the old dilemma: If people were angels, no flags would be necessary. If angels were to govern people, neither external nor internal controls on flags would be necessary.

As someone who’s been deeply involved in writing and subsequently revising and enforcing the terms of service of a website used by hundreds of thousands of people, I know all too well the impossibility of writing out in advance every way in which a system might be abused by people acting in bad faith, or even just (mis)used by people who simply don’t share its creators’ assumptions. Open discussion of core discursive principles can be valuable for communities; but freewheeling discussion, especially of individual cases, can also be destructive. And, as Dan Kahan has so well explained, our different worldviews often mean that a retreat from one field (from ideology to facts, or from substance to procedure, or vice versa) brings all the old battles to the new ground.

Still, there’s much to like about the authors’ call for a system that leaves some traces of debates over content and the associated worldviews, instead of a flagging and deletion system that “obscures or eradicates any evidence that the conflict ever existed.” Battles may leave scars, but that doesn’t mean that the better solution is a memory hole.

Cite as: Rebecca Tushnet, What is a Theorist For? The Recruitment of Users into Online Governance, JOTWELL (August 14, 2015) (reviewing Kate Crawford & Tarleton Gillespie, What is a flag for? Social media reporting tools and the vocabulary of complaint, New Media & Society (2014), available at SSRN), http://cyber.jotwell.com/what-is-a-theorist-for-the-recruitment-of-users-into-online-governance/.