Journalism and Craig Newmark

Dave Winer hits the nail on the head:

Journalism has been very conflicted about Craig Newmark. Truth is he isn’t responsible for anything other than making a product that people wanted. The news industry could have done it, but for some reason didn’t.


Does the news reflect what we die from?

Our World In Data:

The first column represents each cause’s share of US deaths; the second the share of Google searches each receives; third, the relative article mentions in the New York Times; and finally article mentions in The Guardian.

The coverage in both newspapers here is strikingly similar. And the discrepancy between what we actually die from and what the media informs us about is what stands out:

  • around one-third of deaths from the considered causes were due to heart disease, yet this cause of death receives only 2-3 percent of Google searches and media coverage;
  • just under one-third of the deaths came from cancer; we actually google cancer a lot (37 percent of searches) and it is a popular entry here on our site; but it receives only 13-14 percent of media coverage;
  • we search for road incidents more frequently than their share of deaths would suggest; however, they receive much less attention in the news;
  • when it comes to deaths from strokes, Google searches and media coverage are surprisingly balanced;
  • the largest discrepancies concern violent forms of death: suicide, homicide and terrorism. All three receive much more relative attention in Google searches and media coverage than their relative share of deaths. When it comes to the media coverage on causes of death, violent deaths account for more than two-thirds of coverage in the New York Times and The Guardian but account for less than 3 percent of the total deaths in the US.

What’s interesting is that what Americans search for on Google is a much closer reflection of what kills us than what is presented in the media. One way to think about it is that media outlets may produce the content they think readers are most interested in, but this is not necessarily reflected in our preferences when we look for information ourselves.

[Chart: causes of death in the USA vs. media coverage]
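To make the comparison concrete, here is a minimal sketch in Python of the arithmetic behind a chart like this: raw counts of deaths, searches and article mentions are each normalised to a share of their column total, so the four columns become directly comparable. The counts below are invented placeholders for illustration, not Our World In Data’s figures.

    # Sketch of the share comparison described above.
    # NOTE: the counts are made-up placeholders, NOT Our World In Data's figures;
    # they only show how raw counts become comparable relative shares.
    counts = {
        # cause:          (deaths, google_searches, nyt_mentions, guardian_mentions)
        "heart disease":  (647_000, 1_200_000,   300,   280),
        "cancer":         (599_000, 4_100_000, 2_000, 1_900),
        "road incidents": ( 40_000,   900_000,   400,   350),
        "terrorism":      (    100,   700_000, 5_000, 4_800),
    }

    columns = ["deaths", "searches", "nyt", "guardian"]
    # Column totals, so each count can be expressed as a share of its column.
    totals = [sum(row[i] for row in counts.values()) for i in range(len(columns))]

    for cause, row in counts.items():
        shares = "  ".join(
            f"{name}: {100 * value / total:5.1f}%"
            for name, value, total in zip(columns, row, totals)
        )
        print(f"{cause:<14} {shares}")

Even with placeholder numbers like these, terrorism’s tiny share of deaths next to its outsized share of mentions is exactly the kind of discrepancy the chart highlights.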

Regulating the tech giants

It isn’t often I disagree with John Naughton (or Benedict Evans), but John’s supportive quote from Benedict’s recent newsletter is one such occasion. My emphasis:

I think there are two sets of issues to consider here. First, when we look at Google, Facebook, Amazon and perhaps Apple, there’s a tendency to conflate concerns about the absolute size and market power of these companies (all of which are of course debatable) with concerns about specific problems: privacy, radicalization and filter bubbles, spread of harmful content, law enforcement access to encrypted messages and so on, all the way down to very micro things like app store curation. Breaking up Facebook by splitting off Instagram and WhatsApp would reduce its market power, but would have no effect at all on rumors spreading on WhatsApp, school bullying on Instagram or abusive content in the newsfeed. In the same way, splitting YouTube apart from Google wouldn’t solve radicalization. So which problem are you trying to solve?

Breaking up giants should allow competition to resume. That means new entrants who just might compete on privacy or other behaviours we want to encourage. Let’s find out what people want. Maybe a hygienic version of Facebook’s news feed, or even paying a subscription instead of seeing adverts?

Second, anti-trust theory, on both the diagnosis side and the remedy side, seems to be flummoxed when faced by products that are free or as cheap as possible, and that do not rely on familiar kinds of restrictive practices (the tying of Standard Oil) for their market power. The US in particular has tended to focus exclusively on price, where the EU has looked much more at competition, but neither has a good account of what exactly is wrong with Amazon (if anything – and of course it is still less than half the size of Walmart in the USA), or indeed with Facebook. Neither is there a robust theory of what, specifically, to do about it. ‘Break them up’ seems to come more from familiarity than analysis: it’s not clear how much real effect splitting off IG and WA would have on the market power of the core newsfeed, and Amazon’s retail business doesn’t have anything to split off (and no, AWS isn’t subsidizing it). We saw the same thing in Elizabeth Warren’s idea that platform owners can’t be on their own platform – which would actually mean that Google would be banned from making Google Maps for Android. So, we’ve got to the point that a lot of people want to do something, but not really much further.

Yes, anti-trust laws need to evolve (just as anti-trust theory is slowly evolving). But a lot could be done with the interpretation and implementation of the laws we have. Take the existing focus on consumers: who are the consumers? The people who pay the money. If you want to advertise, you’re faced with an effective monopoly. Let’s fix that.

YouTube Executives Ignored Warnings, Letting Toxic Videos Run Rampant

Bloomberg:

The conundrum isn’t just that videos questioning the moon landing or the efficacy of vaccines are on YouTube. The massive “library,” generated by users with little editorial oversight, is bound to have untrue nonsense. Instead, YouTube’s problem is that it allows the nonsense to flourish. And, in some cases, through its powerful artificial intelligence system, it even provides the fuel that lets it spread.

Wojcicki and her deputies know this. In recent years, scores of people inside YouTube and Google, its owner, raised concerns about the mass of false, incendiary and toxic content that the world’s largest video site surfaced and spread. One employee wanted to flag troubling videos, which fell just short of the hate speech rules, and stop recommending them to viewers. Another wanted to track these videos in a spreadsheet to chart their popularity. A third, fretful of the spread of “alt-right” video bloggers, created an internal vertical that showed just how popular they were. Each time they got the same basic response: Don’t rock the boat.

The company spent years chasing one business goal above others: “Engagement,” a measure of the views, time spent and interactions with online videos. Conversations with over twenty people who work at, or recently left, YouTube reveal a corporate leadership unable or unwilling to act on these internal alarms for fear of throttling engagement.

Wojcicki would “never put her fingers on the scale,” said one person who worked for her. “Her view was, ‘My job is to run the company, not deal with this.’” This person, like others who spoke to Bloomberg News, asked not to be identified because of a worry of retaliation.

History will not be kind to people like Wojcicki.

The Filter Bubble is Actually a Decision Bubble

Thomas Baekdal:

Something we see all the time is that there are many people who end up believing something that simply isn’t true, and it is quite painful to watch.

Let me give you a simple example. Take the flat-Earthers. I mean… they are clearly bonkers in their belief that the world is flat, and when you look at this you might think that this is because they are living in a filter bubble.

But it isn’t.

You see, the problem with the flat-Earthers isn’t that they have never heard that the Earth is round. They are fully aware that this is what the rest of us believe in. They have seen all our articles and they have been presented with all the proof.

In fact, when you look at how flat-Earthers interact online, you will notice that they are often commenting on or attacking scientists any time they post a video or an article about space.

So flat-Earthers do not live in a filter bubble. They are very aware that the rest of us know the Earth is actually round, because they spend every single day attacking us for it.

It’s the same with all the other examples where we think people are living in a filter bubble. Take the anti-vaccination lunatics. They too are fully aware that society as a whole, not to mention medical professionals, recommends that you get vaccinated. And they also know that the rest of us think of them as idiots.

They are not living in a filter bubble, but something has happened that has caused them to choose not to believe what is general knowledge.

Well, a normal person believes that the Earth is round, because that seems obvious. A normal person vaccinates their kids, because that’s what the doctors recommend. Normal people believe in climate change, because… well… we can see it with our own eyes.

So, by default, normal people are fine. But then in the media, we often report on things in such a way that we create doubts.

There are many terrible examples of this. One example is ITV’s This Morning, which is one of the top morning TV shows in the UK.

It often does things like this tweet:

[Embedded tweet from This Morning]

My emphasis.

‘Sustained and ongoing’ disinformation assault targets Dem presidential candidates

Politico:

A wide-ranging disinformation campaign aimed at Democratic 2020 candidates is already underway on social media, with signs that foreign state actors are driving at least some of the activity.

The main targets appear to be Sens. Kamala Harris (D-Calif.), Elizabeth Warren (D-Mass.) and Bernie Sanders (I-Vt.), and former Rep. Beto O’Rourke (D-Texas), four of the most prominent announced or prospective candidates for president.

A POLITICO review of recent data extracted from Twitter and from other platforms, as well as interviews with data scientists and digital campaign strategists, suggests that the goal of the coordinated barrage appears to be undermining the nascent candidacies through the dissemination of memes, hashtags, misinformation and distortions of their positions. But the divisive nature of many of the posts also hints at a broader effort to sow discord and chaos within the Democratic presidential primary.

The cyber propaganda — which frequently picks at the rawest, most sensitive issues in public discourse — is being pushed across a variety of platforms and with a more insidious approach than in the 2016 presidential election, when online attacks designed to polarize and mislead voters first surfaced on a massive scale.

People older than 65 share the most fake news, a new study finds

The Verge:

Older Americans are disproportionately more likely to share fake news on Facebook, according to a new analysis by researchers at New York and Princeton Universities. Older users shared more fake news than younger ones regardless of education, sex, race, income, or how many links they shared. In fact, age predicted their behavior better than any other characteristic — including party affiliation.

The role of fake news in influencing voter behavior has been debated continuously since Donald Trump’s surprising victory over Hillary Clinton in 2016. At least one study has found that pro-Trump fake news likely persuaded some people to vote for him over Clinton, influencing the election’s outcome. Another study found that relatively few people clicked on fake news links — but that their headlines likely traveled much further via the News Feed, making it difficult to quantify their true reach. The finding that older people are more likely to share fake news could help social media users and platforms design more effective interventions to stop them from being misled.

Today’s study, published in Science Advances, examined user behavior in the months before and after the 2016 US presidential election. In early 2016, the academics started working with research firm YouGov to assemble a panel of 3,500 people, which included both Facebook users and non-users. On November 16th, just after the election, they asked Facebook users on the panel to install an application that allowed them to share data including public profile fields, religious and political views, posts to their own timelines, and the pages that they followed. Users could opt in or out of sharing individual categories of data, and researchers did not have access to the News Feeds or data about their friends.

Social media is an existential threat to our idea of democracy

John Naughton in The Guardian:

At last, we’re getting somewhere. Two years after Brexit and the election of Donald Trump, we’re finally beginning to understand the nature and extent of Russian interference in the democratic processes of two western democracies. The headlines are: the interference was much greater than what was belatedly discovered and/or admitted by the social media companies; it was more imaginative, ingenious and effective than we had previously supposed; and it’s still going on.

We know this because the US Senate select committee on intelligence commissioned major investigations by two independent teams. One involved New Knowledge, a US cybersecurity firm, plus researchers from Columbia University in New York and a mysterious outfit called Canfield Research. The other was a team comprising the Oxford Internet Institute’s “Computational Propaganda” project and Graphika, a company specialising in analysing social media.

We have been warned.