“Loaded Words: The Challenges of Effective Content Moderation in Politically Charged Environments”
During the eruption of violence between Israel and Palestine in May, Facebook sparked new controversy with its content moderation practices, this time regarding pro-Palestine content. One practice in particular cuts to the heart of Facebook’s often controversial and blunt approach to hate speech: its treatment of the word “Zionist.”
Zionism is an ideology and movement to establish a Jewish national state in the region of Palestine, including present-day Israel.
Although Facebook has not published an official statement on the topic, its spokespeople have given the same answer when reached for comment; spokesperson Dani Lever told The Washington Post in May: “Under our current policies, we allow the term ‘Zionist’ in political discourse, but remove attacks against Zionists in specific circumstances, when there’s context to show it’s being used as a proxy for Jews or Israelis, which are protected characteristics under our hate speech policy.”
How Facebook deciphers the “context” and “specific circumstances” of a post mentioning Zionism, and how it makes the ensuing moderation decision, are hazy at best; according to some Palestine advocates, the result disproportionately falls in favor of content takedowns. Although “Zionist” can certainly be part of racist rhetoric and hate speech, its meaning depends on both speaker and context. Far from being a synonym for “Jew,” many Palestinians — and Jews — use “Zionist” to characterize and criticize what they see as Israel’s discriminatory policies, such as its settlement policy and seizures of Palestinian homes, and to describe a history of colonization in Palestine. Censoring “Zionist” indiscriminately, or with little regard for nuance, could easily stifle critical political discourse, reinforce the suppression of dissenting voices, and limit protest mechanisms for Palestinians and their supporters.
The complex contextuality of such a word, paired with Facebook’s vague moderation policies, creates the perfect storm, one that rains on much of the political discourse on the platform. The trouble begins with Facebook’s first line of defense against hate speech: artificial intelligence. In 2019, 80% of the hate speech Facebook acted upon was flagged first by algorithms. This automation entrenches structural biases and dataset inaccuracies affecting lesser-spoken languages, as well as cultures for which Facebook has developed fewer policy standards.
Furthermore, algorithms may not account for nuanced political conditions (as in the case of Zionism), for slang, or for the ways charged words can be reclaimed by the people they normally target. Blanket bans on racially charged words have censored people who write about race and raise awareness of social justice causes. Beyond initial algorithmic flagging, Facebook’s arcane guidelines for hate speech — like the distinction between quasi-protected and protected speech — mean that some users battle opaque censorship to avoid a defeating, often incomprehensible fate: “Facebook jail,” a suspended account. In light of these criticisms, Facebook is taking steps to move beyond blanket, “race-blind” moderation through the WoW Project, which aims to deprioritize “low-sensitivity” comments against groups that have not been historically marginalized and to direct more effort toward content that experts and users agree is contentious and severely harmful.
To achieve a more nuanced treatment of online speech, continued investment in human content moderation is necessary, including hiring staff from a variety of backgrounds, languages, and perspectives. For example, while Facebook has a policy team for Israel and the Jewish diaspora — led by a former advisor to former Israeli Prime Minister Benjamin Netanyahu — Palestine is covered by a general “Middle East and North Africa” team. In this light, it is perhaps unsurprising that the nonprofit 7amleh found that from January 2020 to June 2020, Facebook complied with 81% of takedown requests issued by Israel’s Cyber Unit (a group within the Israeli State Attorney’s Office), which were often related to Palestine, while none of the seven requests issued by the Palestinian Authority during that time was accepted by Facebook.
If Facebook were to make better data available about which posts it takes down and why, and to offer a well-functioning review system for censored content, these would be important steps toward more nuanced content moderation, greater transparency, and public involvement in political speech on the platform. For example, by sending copies of the takedown notices it receives to the Lumen database, Facebook could share its data with researchers and the public, increasing transparency around its moderation processes. Furthermore, new user interface designs and robust user flagging systems could help people gain control over their Facebook feeds and promote productive political discourse, helping to prevent hate speech in the first place. Indeed, as Jillian York, the director of International Freedom of Expression for the Electronic Frontier Foundation, argues regarding the complexity of the word “Zionist”: “Ultimately, it demonstrates the subjectivity of expression, and why censorship should, if anything, only be a last resort.”
—
About the author: Gina Markov is a rising senior at Yale University pursuing a bachelor’s degree in Applied Mathematics. Gina is passionate about bridging technology, policy, and ethics, and she is spending the summer as an intern for the Lumen database, doing work related to privacy, copyright, and online speech. In her free time, she enjoys long-distance running and hiking, and explores interests in science fiction, antitrust, and francophone literature.