Charles Fitzgerald at Platformonomics:
Instead of “breaking up” Facebook, retroactively undoing acquisitions, or other rage-based remedies, the proposal here is to turn the social graph underlying Facebook (and only the social graph) into a separate, regulated utility. The social graph is a fancy term to describe the basic profile and list of friends associated with every Facebook user.
Today, different parts of the sprawling Facebook application call a common application programming interface (API) to get this list of your friends whenever they need that information. They might use this list to know whose content to use when compiling your newsfeed, or to give you options to message or poke or whatever it is people actually do to one another on Facebook (admittedly I’m not much of a user myself).
Instead of whacking at them with a hatchet, we’d take a precision scalpel to Facebook and excise the social graph API and its underlying data (and to the degree Facebook is consolidating the social graphs of Facebook, Instagram and WhatsApp, that makes it easier for our new utility, but we can operate one utility or three as needed). The new regulated Social Graph Utility would control and operate the social graph. Facebook would naturally continue to access the social graph (and be required to do so under specific terms and conditions). But anyone else outside Facebook could also access the social graph through an API governed by a stringent license.
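To make the proposal concrete, here is a minimal sketch of what “one common API that both Facebook and outsiders call” could look like. Everything below is hypothetical and illustrative — the class and function names are my own invention, not Facebook’s actual API or anything from the Platformonomics post.

```python
# Hypothetical sketch of a "Social Graph Utility": one service owns the
# friend graph, and every client (newsfeed, messaging, or a licensed
# third party) queries it through the same API. All names are invented.

class SocialGraphUtility:
    """Owns friend lists; exposes them through a single licensed API."""

    def __init__(self):
        self._friends: dict[str, set[str]] = {}

    def add_friendship(self, a: str, b: str) -> None:
        # Friendship is symmetric, so store the edge in both directions.
        self._friends.setdefault(a, set()).add(b)
        self._friends.setdefault(b, set()).add(a)

    def get_friends(self, user_id: str) -> set[str]:
        # The one call every client makes, under the utility's terms.
        return set(self._friends.get(user_id, set()))


def compile_newsfeed(graph: SocialGraphUtility, user_id: str,
                     posts_by_author: dict[str, list[str]]) -> list[str]:
    """A client of the utility: keep only posts from the user's friends."""
    friends = graph.get_friends(user_id)
    return [post for author, posts in posts_by_author.items()
            if author in friends for post in posts]


graph = SocialGraphUtility()
graph.add_friendship("alice", "bob")
graph.add_friendship("alice", "carol")

feed = compile_newsfeed(graph, "alice",
                        {"bob": ["bob's post"], "dave": ["dave's post"]})
print(feed)  # dave is not a friend, so only bob's post survives
```

The point of the sketch is the separation: `compile_newsfeed` could belong to Facebook or to a competitor, and neither owns the graph — both go through `get_friends` under whatever terms the regulator sets.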
Interesting idea but unnecessary if you just Ban Digital Advertising.
I think there are two sets of issues to consider here. First, when we look at Google, Facebook, Amazon and perhaps Apple, there’s a tendency to conflate concerns about the absolute size and market power of these companies (all of which are of course debatable) with concerns about specific problems: privacy, radicalization and filter bubbles, spread of harmful content, law enforcement access to encrypted messages and so on, all the way down to very micro things like app store curation. Breaking up Facebook by splitting off Instagram and WhatsApp would reduce its market power, but would have no effect at all on rumors spreading on WhatsApp, school bullying on Instagram or abusive content in the newsfeed. In the same way, splitting Youtube apart from Google wouldn’t solve radicalization. So which problem are you trying to solve?
Breaking up the giants should allow competition to resume. That means new entrants who just might compete on privacy or other behaviours we want to encourage. Let’s find out what people want. Maybe a hygienic version of Facebook’s news feed, or even one you pay a subscription for instead of seeing adverts?
Second, anti-trust theory, on both the diagnosis side and the remedy side, seems to be flummoxed when faced with products that are free or as cheap as possible, and that do not rely on familiar kinds of restrictive practices (the tying of Standard Oil) for their market power. The US in particular has tended to focus exclusively on price, where the EU has looked much more at competition, but neither has a good account of what exactly is wrong with Amazon (if anything – and of course it is still less than half the size of Walmart in the USA), or indeed with Facebook. Neither is there a robust theory of what, specifically, to do about it. ‘Break them up’ seems to come more from familiarity than analysis: it’s not clear how much real effect splitting off IG and WA would have on the market power of the core newsfeed, and Amazon’s retail business doesn’t have anything to split off (and no, AWS isn’t subsidizing it). We saw the same thing in Elizabeth Warren’s idea that platform owners can’t be on their own platform – which would actually mean that Google would be banned from making Google Maps for Android. So, we’ve got to the point that a lot of people want to do something, but not really much further.
Yes, anti-trust laws need to evolve (just as anti-trust theory is slowly evolving). But a lot could be done with the interpretation and implementation of the laws we have. The existing doctrine focuses on consumers. So who are the consumers? The people who pay the money. If you want to advertise, you’re faced with an effective monopoly. Let’s fix that.
The conundrum isn’t just that videos questioning the moon landing or the efficacy of vaccines are on YouTube. The massive “library,” generated by users with little editorial oversight, is bound to have untrue nonsense. Instead, YouTube’s problem is that it allows the nonsense to flourish. And, in some cases, through its powerful artificial intelligence system, it even provides the fuel that lets it spread.
Wojcicki and her deputies know this. In recent years, scores of people inside YouTube and Google, its owner, raised concerns about the mass of false, incendiary and toxic content that the world’s largest video site surfaced and spread. One employee wanted to flag troubling videos, which fell just short of violating the hate speech rules, and stop recommending them to viewers. Another wanted to track these videos in a spreadsheet to chart their popularity. A third, worried about the spread of “alt-right” video bloggers, created an internal vertical that showed just how popular they were. Each time they got the same basic response: Don’t rock the boat.
The company spent years chasing one business goal above others: “Engagement,” a measure of the views, time spent and interactions with online videos. Conversations with over twenty people who work at, or recently left, YouTube reveal a corporate leadership unable or unwilling to act on these internal alarms for fear of throttling engagement.
Wojcicki would “never put her fingers on the scale,” said one person who worked for her. “Her view was, ‘My job is to run the company, not deal with this.’” This person, like others who spoke to Bloomberg News, asked not to be identified because of a worry of retaliation.
History will not be kind to people like Wojcicki.
Something we see all the time is that there are many people who end up believing something that simply isn’t true, and it is quite painful to watch.
Let me give you a simple example. Take the flat-Earthers. I mean… they are clearly bonkers in their belief that the world is flat, and when you look at this you might think that this is because they are living in a filter bubble.
But it isn’t.
You see, the problem with the flat-Earthers isn’t that they have never heard that the Earth is round. They are fully aware that this is what the rest of us believe in. They have seen all our articles and they have been presented with all the proof.
In fact, when you look at how flat-Earthers interact online, you will notice that they are often commenting on or attacking scientists any time those scientists post a video or an article about space.
So flat-Earthers do not live in a filter bubble. They are very aware that the rest of us know the Earth is actually round, because they spend every single day attacking us for it.
It’s the same with all the other examples where we think people are living in a filter bubble. Take the anti-vaccination lunatics. They too are fully aware that society as a whole, not to mention medical professionals, all recommend that you get vaccinated. And they also know that the rest of us think of them as idiots.
They are not living in a filter bubble, but something has happened that has caused them to choose not to believe what is general knowledge.
Well, a normal person believes that the Earth is round, because that seems obvious. A normal person vaccinates their kids, because that’s what the doctors recommend. Normal people believe in climate change, because… well… we can see it with our own eyes.
So, by default, normal people are fine. But then in the media, we often report about things in such a way that we create doubts.
There are many terrible examples of this. One example is ITV’s This Morning, which is one of the top morning TV shows in the UK.
It often does things like this tweet:
A book about Ladders. For cats. In Switzerland.
A wide-ranging disinformation campaign aimed at Democratic 2020 candidates is already underway on social media, with signs that foreign state actors are driving at least some of the activity.
The main targets appear to be Sens. Kamala Harris (D-Calif.), Elizabeth Warren (D-Mass.) and Bernie Sanders (I-Vt.), and former Rep. Beto O’Rourke (D-Texas), four of the most prominent announced or prospective candidates for president.
A POLITICO review of recent data extracted from Twitter and from other platforms, as well as interviews with data scientists and digital campaign strategists, suggests that the goal of the coordinated barrage is to undermine the nascent candidacies through the dissemination of memes, hashtags, misinformation and distortions of their positions. But the divisive nature of many of the posts also hints at a broader effort to sow discord and chaos within the Democratic presidential primary.
The cyber propaganda — which frequently picks at the rawest, most sensitive issues in public discourse — is being pushed across a variety of platforms and with a more insidious approach than in the 2016 presidential election, when online attacks designed to polarize and mislead voters first surfaced on a massive scale.
Older Americans are disproportionately more likely to share fake news on Facebook, according to a new analysis by researchers at New York and Princeton Universities. Older users shared more fake news than younger ones regardless of education, sex, race, income, or how many links they shared. In fact, age predicted their behavior better than any other characteristic — including party affiliation.
The role of fake news in influencing voter behavior has been debated continuously since Donald Trump’s surprising victory over Hillary Clinton in 2016. At least one study has found that pro-Trump fake news likely persuaded some people to vote for him over Clinton, influencing the election’s outcome. Another study found that relatively few people clicked on fake news links — but that their headlines likely traveled much further via the News Feed, making it difficult to quantify their true reach. The finding that older people are more likely to share fake news could help social media users and platforms design more effective interventions to stop them from being misled.
Today’s study, published in Science Advances, examined user behavior in the months before and after the 2016 US presidential election. In early 2016, the academics started working with research firm YouGov to assemble a panel of 3,500 people, which included both Facebook users and non-users. On November 16th, just after the election, they asked Facebook users on the panel to install an application that allowed them to share data including public profile fields, religious and political views, posts to their own timelines, and the pages that they followed. Users could opt in or out of sharing individual categories of data, and researchers did not have access to the News Feeds or data about their friends.
John Naughton in The Guardian:
At last, we’re getting somewhere. Two years after Brexit and the election of Donald Trump, we’re finally beginning to understand the nature and extent of Russian interference in the democratic processes of two western democracies. The headlines are: the interference was much greater than what was belatedly discovered and/or admitted by the social media companies; it was more imaginative, ingenious and effective than we had previously supposed; and it’s still going on.
We know this because the US Senate select committee on intelligence commissioned major investigations by two independent teams. One involved New Knowledge, a US cybersecurity firm, plus researchers from Columbia University in New York and a mysterious outfit called Canfield Research. The other was a team comprising the Oxford Internet Institute’s “Computational Propaganda” project and Graphika, a company specialising in analysing social media.
We have been warned.
Jonathan Albright in Medium:
In 2016, in our discussions about Facebook and the election, we tended to focus mostly on Pages. And paid “ads.” Well, it’s 2018, and this time around, we have another problem to talk about: Facebook Groups. In my extensive look into Facebook, introduced in the previous post, I’ve found that groups have become the preferred base for coordinated information influence activities on the platform. This is a shift that reflects the product’s most important advantage: the posts and activities of the actors who join them are hidden within the Group. Well, at least until they choose to share them.
Inside these political Groups, numbering anywhere from the tens of thousands to the hundreds of thousands of users, activities are perfectly obscured. However, as I will show, the effects of these activities can be significant. The individual posts, photos, events, and files shared within these groups are generally not discoverable through the platform’s standard search feature, or through the APIs that allow content to be retrieved from public Facebook pages. Yet once the posts leave these groups, they can gain traction and initiate large-scale information-seeding and political influence campaigns.
As a result, the actors who used to operate on Pages have now colonized Groups and use them more than ever. This analysis found disinformation and conspiracies being seeded across hundreds of different groups, most falling into what would best be described as political “astroturfing.”
Yes, Facebook groups will end in tears too.