Facebook taps Australian expertise to help counter hate speech

USyd and UQ research project backed by social media giant

Facebook is backing an Australian research project to help it better regulate the hate speech that starts and spreads on its platform.

A joint project proposed by researchers at the University of Sydney and the University of Queensland received $77,000 in funding from the social media giant earlier this month.

The year-long study will be carried out by Dr Aim Sinpeng and Dr Fiona Martin from the University of Sydney and Professor Katharine Gelber and Dr Kirril Shields from the University of Queensland.

“Facebook rarely gives grants to non-computer science researchers. This was indeed the first time that they requested proposals from non-STEM experts,” Dr Sinpeng, an expert on political engagement on Facebook in the Asia Pacific region, said.

The study will investigate and audit what constitutes hate speech in different Asia Pacific jurisdictions and how well Facebook’s policies and procedures are able to identify and regulate this type of content.

The researchers will also map hate speech networks in India, the Philippines, Indonesia, Myanmar and Australia, in order to understand how harmful content is amplified, what actors are involved, and how their activities can be mitigated.

“We aim to find key drivers of hate speech in each of the given networks and examine the factors that help drive its popularity and spread,” said Professor Gelber, head of the University of Queensland’s School of Political Science and International Studies.

Gelber said that by helping Facebook identify linguistically and culturally specific forms of hate speech in the Asian region, the team could suggest improvements to definitions of, and responses to, hate speech, and improve the social network’s policy globally.

Facebook is working to counter mounting criticism of its role in spreading hate speech and violent material.

Earlier this month it banned a number of outspoken far-right figures for violating its hate speech policies. Last week it announced a pilot program in which human content reviewers would be dedicated to identifying and eliminating hate speech.

Figures released by the company reveal that in the first quarter of this year it took down four million hate speech posts. It “continues to make progress on proactively identifying hate speech”, the company claims, adding that it is able to proactively detect 65 per cent of the hateful content it removes, up from 24 per cent in 2017.

“We continue to invest in technology to expand our abilities to detect this content across different languages and regions,” Facebook says.

Facebook was one of the signatories of the ‘Christchurch Call’, which commits it to taking steps to address the uploading and dissemination of “terrorist and violent extremist content”. Facebook Live was used by the Christchurch massacre gunman to stream the attack; the video was viewed 200 times during the live broadcast and about 4000 times in total before being removed.

Users shared and reuploaded the video to the platform: Facebook said it had removed 1.5 million videos of the attack globally, 1.2 million of them at upload.

“In light of the Christchurch Call, this is a critical moment for building a worldwide effort against the spread of organised hate speech, and we aim to help with that,” said Dr Martin.

“It is wonderful to be part of Facebook’s effort to collaborate with researchers from across the globe in tackling this insidious problem, and helping it address its content regulation challenges,” she added.