“Social media companies have created, allowed and enabled extremists to move their message from the margins to the mainstream,” said Jonathan A. Greenblatt, chief executive of the Anti-Defamation League, a nongovernmental organization that combats hate speech. “In the past, they couldn’t find audiences for their poison. Now, with a click or a post or a tweet, they can spread their ideas with a velocity we’ve never seen before.”
Facebook said it was investigating the anti-Semitic hashtags on Instagram after The New York Times flagged them. Sarah Pollack, a Facebook spokeswoman, said in a statement that Instagram was seeing new posts related to the shooting on Saturday and that it was “actively reviewing hashtags and content related to these events and removing content that violates our policies.”
YouTube said it had strict policies prohibiting content that promotes hatred or incites violence, and added that it took down videos that violated those rules.
Social media companies have said that identifying and removing hate speech and disinformation — or even defining what constitutes such content — is difficult. Facebook said this year that only 38% of hate speech on its site was flagged by its internal systems. In contrast, its systems pinpointed and took down 96% of what it defined as adult nudity and 99.5% of terrorist content.
YouTube said users reported nearly 10 million videos from April to June for potentially violating its community guidelines. Just under one million of those videos were found to have broken the rules and were removed, according to the company’s data. YouTube’s automated detection tools took down an additional 6.8 million videos in that period.
A study by MIT researchers, published in March, found that falsehoods on Twitter were 70% more likely to be retweeted than accurate news.