
The ‘Platform’ Excuse Is Dying

The Atlantic:

Platforms might have been something new, but they sure did a lot of things that previous information intermediaries had. “Their choices about what can appear, how it is organized, how it is monetized, what can be removed and why, and what the technical architecture allows and prohibits, are all real and substantive interventions into the contours of public discourse,” Gillespie wrote.

Yet for years the internet platforms mostly denied that they were much of an intervention at all. When Senator Joe Lieberman tried to get YouTube to take down what he characterized as Islamist training videos in 2008, the YouTube team responded with free-speech bromides. “YouTube encourages free speech and defends everyone’s right to express unpopular points of view,” they wrote. “We believe that YouTube is a richer and more relevant platform for users precisely because it hosts a diverse range of views, and rather than stifle debate we allow our users to view all acceptable content and make up their own minds.”

Facebook drew on that sense of being “just a platform” after conservatives challenged what they saw as the company’s liberal bias in mid-2016. Zuckerberg began to use—at least in public—the line that Facebook was “a platform for all ideas.”

But that prompted many people to ask: What about awful, hateful ideas? Why, exactly, should Facebook host them, algorithmically serve them up, or lead users to groups filled with them?

These companies are continuing to make their platform arguments, but every day brings more conflicts that they seem unprepared to resolve. The platform defense used to shut down the why questions: Why should YouTube host conspiracy content? Why should Facebook host provably false information? Facebook, YouTube, and their kin keep trying to answer, We’re platforms! But activists and legislators are now saying, So what? “I think they have proven—by not taking down something they know is false—that they were willing enablers of the Russian interference in our election,” Nancy Pelosi said after the altered-video fracas.

Given how powerful and flexible the rhetoric has been, the idea of the platform will not simply exit stage right. “The platform” once perfumed the naive, meretricious, or odious actions that allowed these companies to expand. But as the term rots, it has begun to stink, and anybody who catches a whiff of it might notice what had been masked. These companies are out to grow their businesses, and every other thing is a means to that end.

Journalism and Craig Newmark

Dave Winer hits the nail on the head:

Journalism has been very conflicted about Craig Newmark. Truth is he isn’t responsible for anything other than making a product that people wanted. The news industry could have done it, but for some reason didn’t.


Does the news reflect what we die from?

Our World In Data:

The first column represents each cause’s share of US deaths; the second the share of Google searches each receives; third, the relative article mentions in the New York Times; and finally article mentions in The Guardian.

The coverage in both newspapers here is strikingly similar. And the discrepancy between what we actually die from and what we get informed of in the media is what stands out:

  • around one-third of the considered causes of deaths resulted from heart disease, yet this cause of death receives only 2-3 percent of Google searches and media coverage;
  • just under one-third of the deaths came from cancer; we actually google cancer a lot (37 percent of searches) and it is a popular entry here on our site; but it receives only 13-14 percent of media coverage;
  • we searched for road incidents more frequently than their share of deaths, however, they receive much less attention in the news;
  • when it comes to deaths from strokes, Google searches and media coverage are surprisingly balanced;
  • the largest discrepancies concern violent forms of death: suicide, homicide and terrorism. All three receive much more relative attention in Google searches and media coverage than their relative share of deaths. When it comes to the media coverage on causes of death, violent deaths account for more than two-thirds of coverage in the New York Times and The Guardian but account for less than 3 percent of the total deaths in the US.

What’s interesting is that what Americans search for on Google is a much closer reflection of what kills us than what is presented in the media. One way to think about it is that media outlets may produce the content they think readers are most interested in, but this is not necessarily reflected in our preferences when we look for information ourselves.
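The mismatch is easy to quantify from the approximate percentages quoted above. This is only a sketch using those figures, not the full Our World in Data dataset:

```python
# Over/under-representation of each cause in NYT coverage, relative to its
# actual share of US deaths. Figures are the approximate percentages quoted
# in the excerpt above.

shares = {
    # cause: (percent of US deaths, percent of NYT coverage)
    "heart disease": (33.0, 2.5),
    "cancer": (30.0, 13.5),
    "violent deaths": (3.0, 67.0),  # suicide, homicide and terrorism combined
}

for cause, (deaths, coverage) in shares.items():
    ratio = coverage / deaths
    label = "over" if ratio > 1 else "under"
    print(f"{cause}: {ratio:.1f}x ({label}-represented in coverage)")
```

By this rough measure, violent deaths are covered at more than twenty times their actual share, while heart disease gets well under a tenth of the attention its death toll would suggest.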

[Chart: causes of death in the USA vs. media coverage]

Regulating Facebook: A Proposal

Charles Fitzgerald at Platformonomics:

Instead of “breaking up” Facebook, retroactively undoing acquisitions, or other rage-based remedies, the proposal here is to turn the social graph underlying Facebook (and only the social graph), into a separate, regulated utility. The social graph is a fancy term to describe the basic profile and list of friends associated with every Facebook user.

Today, different parts of the sprawling Facebook application call a common application programming interface (API) to get this list of your friends whenever they need that information. They might use this list to know whose content to use when compiling your newsfeed, or to give you options to message or poke or whatever it is people actually do to one another on Facebook (admittedly I’m not much of a user myself).

Instead of whacking at them with a hatchet, we’d take a precision scalpel to Facebook and excise the social graph API and its underlying data (and to the degree Facebook is consolidating the social graphs of Facebook, Instagram and WhatsApp, that makes it easier for our new utility, but we can operate one utility or three as needed). The new regulated Social Graph Utility would control and operate the social graph. Facebook would naturally continue to access the social graph (and be required to do so under specific terms and conditions). But anyone else outside Facebook could also access the social graph through an API governed by a stringent license.
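For illustration only, here is a toy sketch of the kind of interface such a utility might expose. Every name here is invented; this is not Facebook’s actual API, just the shape of the idea: one shared service owning the profile and friend-list data, with every consumer going through the same calls.

```python
# Hypothetical sketch of a regulated "Social Graph Utility" interface.

from dataclasses import dataclass, field

@dataclass
class SocialGraphUtility:
    profiles: dict = field(default_factory=dict)  # user id -> basic profile
    friends: dict = field(default_factory=dict)   # user id -> set of friend ids

    def set_profile(self, user_id: str, profile: dict) -> None:
        self.profiles[user_id] = profile

    def get_friends(self, user_id: str) -> set:
        # The single call the newsfeed, messaging, or a licensed third
        # party would use instead of holding its own copy of the graph.
        return self.friends.get(user_id, set())

    def add_friendship(self, a: str, b: str) -> None:
        # Friendship is symmetric, so record it in both directions.
        self.friends.setdefault(a, set()).add(b)
        self.friends.setdefault(b, set()).add(a)

graph = SocialGraphUtility()
graph.add_friendship("alice", "bob")
print(graph.get_friends("alice"))  # {'bob'}
```

The point of the proposal is that Facebook’s newsfeed, its messaging products, and any outside competitor would all go through the same regulated `get_friends` call rather than each keeping its own copy of the graph.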

Interesting idea but unnecessary if you just Ban Digital Advertising.

Regulating the tech giants

It isn’t often I disagree with John Naughton (or Benedict Evans) but John’s supportive quote from Benedict’s recent newsletter is one such occasion. My emphasis:

I think there are two sets of issues to consider here. First, when we look at Google, Facebook, Amazon and perhaps Apple, there’s a tendency to conflate concerns about the absolute size and market power of these companies (all of which are of course debatable) with concerns about specific problems: privacy, radicalization and filter bubbles, spread of harmful content, law enforcement access to encrypted messages and so on, all the way down to very micro things like app store curation. Breaking up Facebook by splitting off Instagram and WhatsApp would reduce its market power, but would have no effect at all on rumors spreading on WhatsApp, school bullying on Instagram or abusive content in the newsfeed. In the same way, splitting Youtube apart from Google wouldn’t solve radicalization. So which problem are you trying to solve?

Breaking up giants should allow competition to resume. That means new entrants who just might compete on privacy or other behaviours we want to encourage. Let’s find out what people want. Maybe a hygienic version of Facebook’s news feed, or one where you pay a subscription instead of seeing adverts?

Second, anti-trust theory, on both the diagnosis side and the remedy side, seems to be flummoxed when faced by products that are free or as cheap as possible, and that do not rely on familiar kinds of restrictive practices (the tying of Standard Oil) for their market power. The US in particular has tended to focus exclusively on price, where the EU has looked much more at competition, but neither has a good account of what exactly is wrong with Amazon (if anything – and of course it is still less than half the size of Walmart in the USA), or indeed with Facebook. Neither is there a robust theory of what, specifically, to do about it. ‘Break them up’ seems to come more from familiarity than analysis: it’s not clear how much real effect splitting off IG and WA would have on the market power of the core newsfeed, and Amazon’s retail business doesn’t have anything to split off (and no, AWS isn’t subsidizing it). We saw the same thing in Elizabeth Warren’s idea that platform owners can’t be on their own platform – which would actually mean that Google would be banned from making Google Maps for Android. So, we’ve got to the point that a lot of people want to do something, but not really much further.

Yes, anti-trust laws need to evolve (just as anti-trust theory is slowly evolving). But a lot could be done with the interpretation and implementation of the laws we have. The existing laws focus on consumers. So who are the consumers? The people who pay the money. If you want to advertise, you’re faced with an effective monopoly. Let’s fix that.

YouTube Executives Ignored Warnings, Letting Toxic Videos Run Rampant

Bloomberg:

The conundrum isn’t just that videos questioning the moon landing or the efficacy of vaccines are on YouTube. The massive “library,” generated by users with little editorial oversight, is bound to have untrue nonsense. Instead, YouTube’s problem is that it allows the nonsense to flourish. And, in some cases, through its powerful artificial intelligence system, it even provides the fuel that lets it spread.

Wojcicki and her deputies know this. In recent years, scores of people inside YouTube and Google, its owner, raised concerns about the mass of false, incendiary and toxic content that the world’s largest video site surfaced and spread. One employee wanted to flag troubling videos, which fell just short of the hate speech rules, and stop recommending them to viewers. Another wanted to track these videos in a spreadsheet to chart their popularity. A third, fretful of the spread of “alt-right” video bloggers, created an internal vertical that showed just how popular they were. Each time they got the same basic response: Don’t rock the boat.

The company spent years chasing one business goal above others: “Engagement,” a measure of the views, time spent and interactions with online videos. Conversations with over twenty people who work at, or recently left, YouTube reveal a corporate leadership unable or unwilling to act on these internal alarms for fear of throttling engagement.

Wojcicki would “never put her fingers on the scale,” said one person who worked for her. “Her view was, ‘My job is to run the company, not deal with this.’” This person, like others who spoke to Bloomberg News, asked not to be identified because of a worry of retaliation.

History will not be kind to people like Wojcicki.

The Filter Bubble is Actually a Decision Bubble

Thomas Baekdal:

Something we see all the time is that there are many people who end up believing something that simply isn’t true, and it is quite painful to watch.

Let me give you a simple example. Take the flat-Earthers. I mean… they are clearly bonkers in their belief that the world is flat, and when you look at this you might think that this is because they are living in a filter bubble.

But it isn’t.

You see, the problem with the flat-Earthers isn’t that they have never heard that the Earth is round. They are fully aware that this is what the rest of us believe in. They have seen all our articles and they have been presented with all the proof.

In fact, when you look at how flat-Earthers interact online, you will notice that they are often commenting or attacking scientists any time they post a video or an article about space.

So flat-Earthers do not live in a filter bubble. They are very aware that the rest of us know the Earth is actually round, because they spend every single day attacking us for it.

It’s the same with all the other examples where we think people are living in a filter bubble. Take the anti-vaccination lunatics. They too are fully aware that society as a whole, not to mention medical professionals, all recommend that you get vaccinated. And, they also know that the rest of us think about them as idiots.

They are not living in a filter bubble, but something has happened that has caused them to choose not to believe what is general knowledge.

Well, a normal person believes that the Earth is round, because that seems obvious. A normal person vaccinates their kids, because that’s what the doctors recommend. Normal people believe in climate change, because… well… we can see it with our own eyes.

So, by default, normal people are fine. But then in the media, we often report about things in such a way that we create doubts.

There are many terrible examples of this. One example is ITV’s This Morning, which is one of the top morning TV shows in the UK.

It is often doing things like this tweet:

[Embedded tweet from ITV’s This Morning]

My emphasis.

‘Sustained and ongoing’ disinformation assault targets Dem presidential candidates

Politico:

A wide-ranging disinformation campaign aimed at Democratic 2020 candidates is already underway on social media, with signs that foreign state actors are driving at least some of the activity.

The main targets appear to be Sens. Kamala Harris (D-Calif.), Elizabeth Warren (D-Mass.) and Bernie Sanders (I-Vt.), and former Rep. Beto O’Rourke (D-Texas), four of the most prominent announced or prospective candidates for president.

A POLITICO review of recent data extracted from Twitter and from other platforms, as well as interviews with data scientists and digital campaign strategists, suggests that the goal of the coordinated barrage appears to be undermining the nascent candidacies through the dissemination of memes, hashtags, misinformation and distortions of their positions. But the divisive nature of many of the posts also hints at a broader effort to sow discord and chaos within the Democratic presidential primary.

The cyber propaganda — which frequently picks at the rawest, most sensitive issues in public discourse — is being pushed across a variety of platforms and with a more insidious approach than in the 2016 presidential election, when online attacks designed to polarize and mislead voters first surfaced on a massive scale.