
Turns Out 1 In Every 1000 Posts Seen On Facebook Is Hate Speech

And moderators are still being subjected to unfair working conditions to keep harmful content off our screens.


But Facebook says almost 95 percent of the hate speech content removed is proactively detected by AI.

In its latest quarterly Community Standards Enforcement Report, Facebook has made the ‘prevalence’ of hate speech on its platform public for the first time. Guy Rosen, VP of Integrity at Facebook, describes ‘prevalence’ as “the most important metric. Prevalence is like an air quality test to measure pollution”.

And while strides have been made in proactive detection of hate speech, the platform still has a lot of work to do.

The report found that 0.11 percent of all content views on the platform were views of hate speech. Though that does not sound like much, it is: it means roughly 1 in every 1,000 posts seen by Facebook’s average of 1.82 billion daily users contains hate speech.

This quarter alone, 22.1 million pieces of content were removed, 95 percent of which were detected by AI. This time last year, 6.9 million pieces of content were removed, 80 percent of which were detected by AI. As the platform only made the prevalence of hate speech public this quarter, it is not clear exactly why around 15 million more pieces of content were removed compared to a year ago.

Though Facebook’s AI technology has improved, this still leaves around a million pieces of content to be detected and reported by users. When this happens, one of Facebook’s 35,000 content reviewers takes over and moderates the content. Even in cases where AI has detected content, moderators may have to step in: while AI systems remove content that is near-certain to violate Facebook’s hate speech rules, situations that require context and an understanding of nuance are handed over to moderators.

Facebook moderators have suffered terrible working conditions, and exposure to some content has led to some developing PTSD-like symptoms. That’s not all: just a few weeks ago, while other company employees were told to work from home until the middle of 2021, moderators were called back to offices, where they have been exposed to COVID-19.

In an open letter, 200 moderators from around the world alleged that Facebook has refused them hazard pay, and called its AI moderation a failure because it does not capture the most harmful content on the site.

“At the start of the pandemic, both full-time Facebook staff and content moderators worked from home. To cover the pressing need to moderate the masses of violence, hate, terrorism, child abuse, and other horrors that we fight for you every day, you sought to substitute our work with the work of a machine.

Without informing the public, Facebook undertook a massive live experiment in heavily automated content moderation. Management told moderators that we should no longer see certain varieties of toxic content coming up in the review tool from which we work— such as graphic violence or child abuse, for example.

The AI wasn’t up to the job. Important speech got swept into the maw of the Facebook filter—and risky content, like self-harm, stayed up.

The lesson is clear. Facebook’s algorithms are years away from achieving the necessary level of sophistication to moderate content automatically. They may never get there.

This raises a stark question. If our work is so core to Facebook’s business that you will ask us to risk our lives in the name of Facebook’s community—and profit—are we not, in fact, the heart of your company?”

Even with moderators, a crucial gap remains in Facebook’s systems for stopping hate speech on the platform: languages. Facebook says it has increased its moderation capacity to over 50 languages, and improved detection capabilities in English, Arabic and Spanish. But Facebook offers users the ability to choose from over 100 languages, and more than 70 percent of its users are based in Asia Pacific and what it calls the “rest of world”. The majority of languages on Facebook don’t have translated information about reporting any harmful content, let alone hate speech, which has led to the company being accused of treating certain markets purely as business opportunities.

In India, for example, Facebook has allowed Islamophobic speech to remain on its platform to avoid upsetting the ruling nationalist party, a move that has been described as one in which “at best, Facebook is complicit through inaction, and at worst it shows outright deference to violent ethno-nationalist forces in the region”.

Hate speech circulating on Facebook in Ethiopia has also had real-life ramifications, pushing the country into ethnic violence and close to genocide. Little is known about content moderation efforts there, or about their success.

Though Facebook has been at the centre of hate speech leading to real-life ethnic violence in many places around the globe, it is clear the company is not keeping up, either because it cannot or because it does not prioritise global communities’ wellbeing. And while hate speech might be disappearing off our feeds in Australia, we must remember we are the lucky ones.