Older Americans are disproportionately likely to share fake news on Facebook, according to a new analysis by researchers at New York and Princeton Universities. Older users shared more fake news than younger ones regardless of education, sex, race, income, or how many links they shared. In fact, age predicted their behavior better than any other characteristic — including party affiliation.
The role of fake news in influencing voter behavior has been debated continuously since Donald Trump’s surprising victory over Hillary Clinton in 2016. At least one study has found that pro-Trump fake news likely persuaded some people to vote for him over Clinton, influencing the election’s outcome. Another study found that relatively few people clicked on fake news links — but that their headlines likely traveled much further via the News Feed, making it difficult to quantify their true reach. The finding that older people are more likely to share fake news could help social media users and platforms design more effective interventions to stop them from being misled.
Today’s study, published in Science Advances, examined user behavior in the months before and after the 2016 US presidential election. In early 2016, the academics started working with research firm YouGov to assemble a panel of 3,500 people, which included both Facebook users and non-users. On November 16th, just after the election, they asked Facebook users on the panel to install an application that allowed them to share data including public profile fields, religious and political views, posts to their own timelines, and the pages that they followed. Users could opt in or out of sharing individual categories of data, and researchers did not have access to the News Feeds or data about their friends.
Jonathan Albright in Medium:
In 2016, in our discussions about Facebook and the election, we tended to focus mostly on Pages. And paid “ads.” Well, it’s 2018, and this time around, we have another problem to talk about: Facebook Groups. In my extensive look into Facebook, introduced in the previous post, I’ve found that groups have become the preferred base for coordinated information influence activities on the platform. This is a shift that reflects the product’s most important advantage: the posts and activities of the actors who join them are hidden within the Group. Well, at least until they choose to share them.
Inside these political Groups, numbering anywhere from the tens of thousands to the hundreds of thousands of users, activities are perfectly obscured. However, as I will show, the effects of these activities can be significant. The individual posts, photos, events, and files shared within these groups are generally not discoverable through the platform’s standard search feature, or through the APIs that allow content to be retrieved from public Facebook pages. Yet once the posts leave these groups, they can gain traction and initiate large-scale information-seeding and political influence campaigns.
As a result, the actors who used to operate on Pages have now colonized Groups and use them more than ever. This analysis found disinformation and conspiracies being seeded across hundreds of different groups, most falling into what would best be described as political “astroturfing.”
Yes, Facebook groups will end in tears too.
A Labour MP has asked Theresa May whether she or any other minister had ever declined a request from the security services to conduct an investigation into the controversial Leave.EU campaign donor Arron Banks.
Ben Bradshaw wrote to the prime minister a day after it was announced that a criminal investigation into Banks had begun, amid repeated allegations that May had blocked an investigation in 2016, when she was home secretary.
Bradshaw said the allegation was extremely serious. “I have today written to the prime minister to ask if she or any other minister or senior official has at any stage declined a request from any of our security, intelligence or law enforcement agencies to investigate Banks,” he said.
My emphasis. For years we have wondered why there has been no Government action to launch a Mueller-style investigation into foreign influence in the Brexit referendum. Occam’s razor: it wasn’t some vast conspiracy, merely a desire to avoid personal embarrassment for Theresa May, then Home Secretary and now Prime Minister.
On 17 October 2018 Twitter released two previously unseen datasets, including nine million tweets from accounts that Twitter believes to be controlled by the Russian Internet Research Agency (IRA).
This short paper lays out an attempt to measure how much activity from Russian state-operated accounts released in the dataset made available by Twitter was targeted at the United Kingdom. Finding UK-related Tweets is not an easy task. By applying a combination of geographic inference, keyword analysis and classification by algorithm, we identified UK-related Tweets sent by these accounts and subjected them to further qualitative and quantitative analytic techniques.
We found there were three phases in Russian influence operations: under-the-radar account building, minor Brexit vote visibility, and larger-scale visibility during the London terror attacks. Russian influence operations linked to the UK were most visible when discussing Islam. Tweets discussing Islam over the period of terror attacks between March and June 2017 were retweeted 25 times more often than their other messages.
Read the paper in full here.
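The keyword-analysis step the researchers describe could, in its simplest form, look something like the sketch below. The keyword list and example tweets are invented for illustration; the paper’s actual pipeline combines keyword matching with geographic inference and an algorithmic classifier.

```python
# Minimal sketch of keyword-based filtering for UK-related tweets.
# The keyword set below is hypothetical, not taken from the paper.
UK_KEYWORDS = {"brexit", "london", "westminster", "nhs", "theresa may"}

def is_uk_related(tweet_text: str) -> bool:
    """Return True if the tweet mentions any UK-related keyword."""
    text = tweet_text.lower()
    return any(keyword in text for keyword in UK_KEYWORDS)

tweets = [
    "Brexit vote shows the people have spoken",
    "Breaking news from Washington today",
    "Prayers for London after the attack",
]
uk_tweets = [t for t in tweets if is_uk_related(t)]
# uk_tweets keeps the first and third tweets
```

A real classifier would need far more than substring matching — hence the paper’s combination of methods — but this shows why identifying UK-related tweets at scale is tractable at all.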
The National Crime Agency is to investigate allegations of multiple criminal offences by Arron Banks and his unofficial leave campaign in the Brexit referendum, prompting calls from some MPs for the process of departing the European Union to be suspended.
The NCA would look into suspicions that a “number of criminal offences may have been committed”, the Electoral Commission said in a statement, saying there were reasonable grounds to suspect Banks was “not the true source” of £8m in funding to the Leave.EU campaign.
The commission said the cases involve Banks, the insurance millionaire who heavily backed leave; Elizabeth Bilney, one of his key associates; Leave.EU itself; the company used to finance it; and “other associated companies and individuals”.
“Social media companies have created, allowed and enabled extremists to move their message from the margins to the mainstream,” said Jonathan A. Greenblatt, chief executive of the Anti-Defamation League, a nongovernmental organization that combats hate speech. “In the past, they couldn’t find audiences for their poison. Now, with a click or a post or a tweet, they can spread their ideas with a velocity we’ve never seen before.”
Facebook said it was investigating the anti-Semitic hashtags on Instagram after The New York Times flagged them. Sarah Pollack, a Facebook spokeswoman, said in a statement that Instagram was seeing new posts related to the shooting on Saturday and that it was “actively reviewing hashtags and content related to these events and removing content that violates our policies.”
YouTube said it has strict policies prohibiting content that promotes hatred or incites violence and added that it takes down videos that violate those rules.
Social media companies have said that identifying and removing hate speech and disinformation — or even defining what constitutes such content — is difficult. Facebook said this year that only 38% of hate speech on its site was flagged by its internal systems. In contrast, its systems pinpointed and took down 96% of what it defined as adult nudity, and 99.5% of terrorist content.
YouTube said users reported nearly 10 million videos from April to June for potentially violating its community guidelines. Just under one million of those videos were found to have broken the rules and were removed, according to the company’s data. YouTube’s automated detection tools also took down an additional 6.8 million videos in that period.
A study by researchers from MIT that was published in March found that falsehoods on Twitter were 70% more likely to be retweeted than accurate news.
Damian Collins in The Guardian:
Arron Banks, the chairman of Leave.EU, has taken the unusual step of writing to each household in my parliamentary constituency of Folkestone and Hythe, telling them that I am a “disgrace” and a “snake in the grass”. He claims that “I have never respected the result of the [Brexit] referendum.” However, he is unable to point to anything in my voting record in parliament to substantiate his assertion.
This letter has certainly provoked a strong response. One constituent has contacted me saying it is, “the most despicable piece of slander and defamation I have seen”. Another wrote calling his letter, “a libellous and ridiculous attack on your character as a member of parliament. Regardless of our political persuasions, I utterly deplore such bully-boy tactics and reject his garbled nonsense.”
It is clear that Banks’s main complaint against me is that I, and the other members of the digital, culture, media and sport select committee that I chair, have called on him to give evidence to our inquiry into disinformation and fake news. He is angry that we asked him about his links to Russia, secret meetings with that country’s ambassador, connections to Cambridge Analytica, and where he found the funds to become the biggest donor in British political history, when so many of his businesses seem to lose money. Like so many would-be bullies, Banks likes to have a go at other people, but hates being questioned about his own affairs.
As an MP and chair of a select committee, it’s my responsibility to make sure we pursue our inquiries without fear or favour. Banks’s strategy is one of intimidation, and when asked recently whether by sending these letters he was trying to put the frighteners on MPs, he said, “there is an element of that.” Well no matter how many letters he writes, I won’t be stopped by him from doing my job.
Banks also complained that in the summer I “shared a platform … with Guardian journalists.” This is true: I spoke at the Byline festival in August with the Orwell prize-winning journalist Carole Cadwalladr – someone who receives abusive messages on social media from Banks and his cronies. Cadwalladr and I were both invited to speak about our respective investigations into disinformation, including the role of Russia in promoting it, and social media in allowing it to spread. Banks hates us talking about this. For those of you who aren’t familiar with the Byline festival, it is an independent summer event that promotes free speech and independent journalism: Banks hates Byline as well.
Collins has done outstanding work as chair of the select committee.
One of Facebook’s major efforts to add transparency to political advertisements is a required “Paid for by” disclosure at the top of each ad supposedly telling users who is paying for political ads that show up in their news feeds.
But on the eve of the 2018 midterm elections, a VICE News investigation found the “Paid for by” feature is easily manipulated and appears to allow anyone to lie about who is paying for a political ad, or to pose as someone paying for the ad.
To test it, VICE News applied to buy fake ads on behalf of all 100 sitting U.S. senators, including ads “Paid for by” Mitch McConnell and Chuck Schumer. Facebook’s approvals were bipartisan: All 100 sailed through the system, indicating that just about anyone can buy an ad identified as “Paid for by” a major U.S. politician.
What’s more, all of these approvals were granted to be shared from pages for fake political groups such as “Cookies for Political Transparency” and “Ninja Turtles PAC.” VICE News did not buy any Facebook ads as part of the test; rather, we received approval to include “Paid for by” disclosures for potential ads.
Just all too predictable. I’m not sure that Nick Clegg will have any effect on Facebook’s culture of lying.
This “can’t do it alone” line — which I hear all the time — is ridiculous. Media companies sold political ads for decades without running fraudulent and mislabeled ads. You only “can’t do it alone” if your business model is self-service, user-bought ads with minimal oversight.
In fact, if your business model forces you to outsource quality control to journalists and NGOs in order to avoid having your ad network weaponized by political operatives…maybe the issue is not disclaimers and labeling at all!
Then there’s this tweet:
Just three days after the Pittsburgh synagogue massacre, Facebook was allowing advertisers to target users interested in white genocide — the same myth the alleged shooter believed in https://theintercept.com/2018/11/02/facebook-ads-white-supremacy-pittsburgh-shooting/
a Facebook spokesperson told me that the “white genocide conspiracy theory” ad buy didn’t violate the company’s ad rules because it was a category the company itself had generated
Despite telling me that the white genocide ad buy didn’t violate Facebook’s ad policies, and despite having the ad buy approved by Facebook, we received this message [stating that the ads don’t comply] shortly after asking Facebook for comment
The bottom line is this tweet:
The takeaway here isn’t that Facebook supports white genocide myths. It’s that Facebook built a self-service ad platform with minimal oversight for 2 billion people, and this is what happens when you do that.
Facebook’s current business model is that of a polluter who leaves others to bear the external costs. If your business is only profitable if you skip moderation, you don’t have a proper business. If you can afford moderation you must do it.
A tale of two Twitters. Threats against @RochelleRitchie from the bombing suspect were only taken seriously by Twitter after he was taken into custody. This is what Twitter sent her two weeks ago and what they sent her tonight. https://www.cnn.com/2018/10/26/tech/cesar-sayoc-twitter-response/index.html @jack
Twitter still hasn’t got the hang of this content moderation business, has it?