Facebook’s failure in Myanmar is the work of a blundering toddler
Human rights groups and researchers have been warning Facebook since 2013 that its platform was being used to spread misinformation and promote hatred of Muslims, particularly the Rohingya. As its user base in Myanmar exploded to 18 million, so too did hate speech, but the company was slow to react, and earlier this year a UN investigator accused the platform of fuelling anti-Muslim violence.
The Australian journalist and researcher Aela Callan warned Facebook about the spread of anti-Rohingya posts on the platform in November 2013. She met with the company’s most senior communications and policy executive, Elliott Schrage. He referred her to staff at Internet.org, the company’s effort to connect the developing world, and a couple of Facebook employees who dealt with civil society groups. “He didn’t connect me to anyone inside Facebook who could deal with the actual problem,” she told Reuters.
In mid-2014, after false rumours online about a Muslim man raping a Buddhist woman triggered deadly riots in the city of Mandalay, the Myanmar government requested a crisis meeting with Facebook. The company's response was to ask government representatives to email examples of dangerous false news as they saw them, which it would then review.
It took until April this year – four years later – for Mark Zuckerberg to tell Congress that Facebook would step up its efforts to block hate messages in Myanmar, saying “we need to ramp up our effort there dramatically”.
Since then it has deleted the accounts of some known hate figures, but this week's Reuters investigation – which found more than 1,000 posts, images and videos attacking Myanmar's Muslims – shows there's a long way to go.
A key issue for civil society groups is Facebook's lack of Burmese-speaking content moderators: in early 2015, there were just two.
Until Wednesday of this week, Facebook had refused to reveal how many Burmese-speaking content reviewers it has hired since then.