
What Does Facebook Think Free Speech is For?

“Who should decide what is hate speech in an online global community?” That’s the question Richard Allan, Facebook’s Vice President for Public Policy in Europe, the Middle East, and Africa, is asking in the wake of reporting on the social network’s content moderation guidelines. The investigative outlet ProPublica’s headline—“Facebook’s Secret Censorship Rules Protect White Men from Hate Speech But Not Black Children”—captures our almost dystopian fear of an all-powerful corporation rigging political discourse to serve shareholders, advertisers, and procrastinators the world over. Just imagine the 7,500-strong “community operations team” as uniformed propagandists searching for content that bucks the party line, and your Orwellian masterpiece is off to a fine start.

At first glance, removing hate speech might seem to depend exclusively on moderators’ ability to judge which posts cause serious harm to users—a task difficult only because determining that harm is so tricky. Yet as Facebook acknowledges, its own categories of hate speech don’t function purely as protections against feeling threatened by others online.

For example, categorically demeaning African arrivals to Italy violates the social network’s rules, but advocating for proposals to deny refugees Italian welfare does not. And this remains true even if both actions cause comparable suffering to their migrant subjects. As Allan explains with reference to German debates on migrants, “we have left in place the ability for people to express their views on immigration itself. And we are deeply committed to making sure Facebook remains a place for legitimate debate.” In other words, Facebook will permit some “legitimate” posts in spite of their potential to harm shielded groups.

What kind of debate qualifies as legitimate in Facebook’s eyes? The company doesn’t say. One approach is to classify hateful content, like much-scrutinized fake news, as a subset of false speech: group-focused hate speech contains generalizations or arguments that take no time to debunk, while more involved political content requires prohibitive resources to fact-check properly, so only the former is a candidate for removal.

However, even if removing egregiously incorrect posts were a good idea, Facebook uses other variables to decide the boundaries of legitimate discussion. When then-presidential candidate Donald Trump called for a ban on Muslims entering the United States, he likely ran afoul of the site’s rules against “calling for exclusion” of protected classes—but reports indicate that Facebook CEO Mark E. Zuckerberg, a former member of the Class of 2006, permitted the content to remain on his platform “because it was part of the political discourse.” The company’s efforts to exclude hate do not amount to eradicating falsehood.

Facebook’s selective moderation suggests that legitimate content for the company is not necessarily true or respectful content, but material whose publication it deems valuable from the public’s point of view. Even if the social network could have stopped users from hearing Trump’s Muslim ban speech, for instance, doing so would have prevented voters from learning something important about the candidate’s policy preferences.

This desire to inform citizens just illustrates how any outfit’s censorship practices—or lack thereof—reflect a normative set of ideas about what best serves the interests of users. When Facebook, Google, or others frame content regulation as concerned with the “safety” of users, they mask the extent to which that safety is just one piece of a broader, and possibly controversial, conception of how we should lead our digital lives.

A social network that helps to structure the discourse of nearly two billion individuals ought to justify the design it chooses for them. And to its credit, Facebook seems more interested than just about any other technology company in giving explicit voice to its vision of “building global community.” But the fact that the company’s moderation guidelines were developed ad hoc and without user input over the span of several years is worrying and hard to defend. When we stop pretending that online “platforms” are amoral structures, we also see the urgent need to scrutinize their foundations.

As it stands, the question of who ought to define and regulate hate speech is a moot one. Except where European authorities have intervened, Facebook and other companies are already answering it for us, whether or not we accept their verdicts. Undoubtedly, many well-intentioned technologists envision a future in which online platforms guide political and social debate to be as robust as possible. But absent major changes, we can only hope that their utopia is not our dystopian future.

Gabriel H. Karger ’18 is a philosophy concentrator in Mather House.
