The new surveillance capitalism

John Naughton in Prospect:

Consider the so-called “Right to Be Forgotten” granted by the European Court of Justice (ECJ) in 2014, which gives European Union citizens the right to petition Google to have information about them removed from the company’s search results in Europe. To call it a “right to be forgotten” is not strictly accurate; it is merely a right to request that certain information not be listed in Google’s European search results—although in our networked world, this almost amounts to the same thing. If the dominant search engine doesn’t find you, then you have effectively ceased to exist. The ECJ ruling bows to the reality that Google has a unique capacity to make or break reputations.

Google itself has been given responsibility for managing the complaints and adjudicating who gets to be “forgotten,” effectively outsourcing a judicial responsibility to a private company. Territorial sovereignty, the kind exercised by elected governments, has been supplanted by what the legal scholar Frank Pasquale calls “functional sovereignty.” The digital giants, Pasquale maintains, “are no longer market participants.” Rather, “they are market makers” in their fields, “able to exert regulatory control over the terms on which others can sell goods and services.” Moreover, he says, “they aspire to displace more government roles over time, replacing the logic of territorial sovereignty with functional sovereignty. In functional arenas from room-letting to transportation to commerce, persons will be increasingly subject to corporate, rather than democratic, control.”

There is no issue so big that the tech companies think they can’t handle it themselves. When the issue of fake news flared up amid the US election, for example, the first response of Mark Zuckerberg, Facebook’s CEO, was a mixture of denial and incredulity. Then, as the evidence mounted that his advertising machine had been weaponised by dark political actors, he pivoted rapidly from that incredulity to scepticism and then—as the evidence became incontrovertible—to a technocratic determination to “solve” the problem. By the end of September, he was issuing a personal Yom Kippur post on Facebook pleading for “forgiveness” in light of the way that “my work was used to divide people.”

But Facebook has two conflicts of interest that inhibit it from fixing the problems. First, surveillance capitalism requires the maximisation of “user engagement,” to create the data that is to be monetised. And it turns out that Facebook users are often more engaged by fake news than they are by mundane truths. The much-vaunted, pending overhaul of Facebook’s algorithm to give priority to material shared between individuals represents a retreat from real news just as much as it does from fake. That could have adverse implications for the responsible media, and meanwhile, of course, that emphasis on shares only deepens the “engagement.” The second conflict stems from the fact that if Zuckerberg were to accept editorial responsibility for what is posted by his website’s users it would effectively destroy his company, given that there aren’t enough administrators in the world to vet what gets posted to Facebook in a single second.

This hasn’t stopped some from demanding that social media organisations accept responsibility for what appears on their sites. And the Germans have passed legislation that mandates swingeing penalties on platforms that do not take down offending content in a matter of hours. But like the right to be forgotten, this statute delegates to private companies the task of deciding what shall and shall not be published in a democracy—another illustration of functional sovereignty replacing its territorial counterpart.

Back in the 1980s, the cultural critic Neil Postman argued that our future would be bracketed by the nightmares of two British novelists: George Orwell, who thought we would be destroyed by the things we fear; and Aldous Huxley, who believed that our undoing would be the things that delighted us. With the aid of digital technology, we are managing to achieve both nightmares at once. We click compulsively on health scares and other anxieties while Big Brother watches, but are also distracted by dubious political claims that make us feel good and reinforce our prejudices.


Flock’n Roll

This explorable illustrates an intuitive dynamic model for collective motion (swarming) in animal groups. The model can be used to describe collective behavior observed in schooling fish or flocking birds, for example.
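The explorable itself doesn’t spell out its equations in this excerpt, but a minimal Vicsek-style alignment model gives the flavour of this kind of swarming dynamics: each agent steers toward the average heading of its neighbours, plus a little noise. The sketch below is a rough illustration in Python, not the explorable’s actual model; the parameters (interaction radius, speed, noise strength, box size) are arbitrary assumptions.

    import numpy as np

    def vicsek_step(pos, angles, box=10.0, radius=1.0, speed=0.05, noise=0.3, rng=None):
        """One update of a Vicsek-style alignment model (illustrative parameters only)."""
        rng = rng if rng is not None else np.random.default_rng()
        n = len(pos)

        # Pairwise displacements with periodic (wrap-around) boundaries.
        delta = pos[:, None, :] - pos[None, :, :]
        delta -= box * np.round(delta / box)
        neighbours = np.linalg.norm(delta, axis=-1) < radius  # boolean n x n mask; includes self

        # Each agent adopts the mean heading of its neighbours (via summed unit vectors)...
        cos_sum = (neighbours * np.cos(angles)).sum(axis=1)
        sin_sum = (neighbours * np.sin(angles)).sum(axis=1)
        new_angles = np.arctan2(sin_sum, cos_sum)
        # ...perturbed by a small random angle.
        new_angles += rng.uniform(-noise / 2, noise / 2, size=n)

        # Move forward at constant speed and wrap positions back into the box.
        step = speed * np.column_stack([np.cos(new_angles), np.sin(new_angles)])
        return (pos + step) % box, new_angles

    # Example: 200 agents with random initial positions and headings.
    rng = np.random.default_rng(0)
    pos = rng.uniform(0.0, 10.0, size=(200, 2))
    angles = rng.uniform(-np.pi, np.pi, size=200)
    for _ in range(500):
        pos, angles = vicsek_step(pos, angles, rng=rng)

With low noise and a dense enough group, the headings align into coherent, flock-like motion; turn the noise up and the order breaks down, which is the basic trade-off these swarming models explore.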

More explorables.

Trolls on Twitter: How Mainstream and Local News Outlets Were Used to Drive a Polarized News Agenda

Berkman Klein Center:

I’ll attempt to summarize the main themes in this write-up, but the broader linking patterns of the troll accounts show three initial things:

Trolls are using real news — and in particular local news — to drive reactionary news coverage, set the daily news agenda, and target local journalists and community influencers to follow certain stories.

At certain points, such as the week following Hillary Clinton’s 9/11 illness in 2016, all the categories of trolls — meaning the local news, BLM, and far-right hyper-partisan accounts — come together to push out the full gamut of polarizing news coverage. Specifically, during the period around 9/15/16–9/18/16 (see the Tableau image above with the volume peaks), many of the links to stories that were broadcast originated from Breitbart, The Daily Caller, and The Gateway Pundit. Yet a significant portion also came from outlets such as The Hill and The Washington Post, as well as the full gamut of regional “local” media.

There is a clear pattern of setting up accounts to amplify “real” local news coverage in American cities struggling with racial, class, and structural social divides. The linking activity shows a concerted effort in cities like Chicago, Houston, St. Louis, Kansas City, Baton Rouge, and New Orleans. In most cases, the tweets in the data set were categorized as “original tweets”, but these tweets often look like retweets. Some links also appeared to come directly out of local news organizations’ own RSS feeds. The use of Bitly, regular RSS feeds, (Google) Feedproxy, and Trendolizer was a prominent theme in the local linkage patterns.

Google Autocomplete Still Makes Vile Suggestions

Wired:

It’s true, as Gingras said, that these algorithms will never be perfect. But that shouldn’t absolve Google. This isn’t some naturally occurring phenomenon; it’s a problem of Google’s own creation.

The question is whether the company is taking enough steps to systematically fix the problems it has created, instead of tinkering with individual issues as they arise. If Alphabet, Google’s parent company with a nearly $700 billion market cap, more than 70,000 employees, and thousands of so-called raters around the world vetting its search results, really does throw all available resources at eradicating ugly and biased results, how is it that over the course of just about a dozen searches, I found seven that were clearly undesirable, both because they’re offensive and because they’re uninformative? Of all the things I could be asking about white supremacy, whether it’s “good” hardly feels like the most relevant question.

“It creates a world where thoughts are put in your head that you haven’t thought to think about,” Venkatasubramanian says. “There is a value in autocomplete, but it becomes a question of when that utility collides with the harm.”


Merely a Warning that a Noun is Coming

Bee Wilson in London Review of Books:

In 1930, John Brophy and Eric Partridge published a collection of British songs and slang from the war. They claimed that soldiers used the word ‘fucking’ so often that it was merely a warning ‘that a noun is coming’. In a normal situation, swear words are used for emphasis, but Brophy and Partridge found that obscenity was so over-used among the military in the Great War that if a soldier wanted to express emotion he wouldn’t swear. ‘Thus if a sergeant said, “Get your —ing rifles!” it was understood as a matter of routine. But if he said, “Get your rifles!” there was an immediate implication of urgency and danger.’

Prayer

Scott Galloway:

I’m 100% certain there is no god. At least not the Morgan Freeman / Lifetime channel / Fox version of God. However, I do pray. Just as writing down your goals makes them more likely to come to fruition, being grateful has been proven to increase health and life expectancy. Writing about your aspirations and articulating all the things you’re grateful for is a form of prayer.

An interesting perspective.

The more Facebook examines itself, the more fault it finds

Casey Newton in The Verge:

So here’s what we now think we know about Facebook and democracy — or, at least, what Facebook no longer disputes:

  • Facebook’s targeting tools are easily abused by bad actors, including foreign governments. Russia’s use of these tools in the 2016 US presidential election was of course instrumental in kicking off this entire discussion. (Some of it is still online!)
  • Sophisticated misinformation campaigns will defeat Facebook’s best efforts to defeat them, at least some of the time. In one case, a single firm in Poland created 40,000 fake accounts to be deployed for propaganda purposes.
  • Filter bubbles are real, and difficult to burst. Pew says that political polarization in the United States began more than 20 years ago. But Facebook’s design can accelerate that polarization.
  • Governments are using Facebook to target and harass their own citizens, sometimes resulting in real-world violence. In Cambodia, authorities have arrested opposition party leaders based on false stories — and also arrested citizens who spoke out against Prime Minister Hun Sen.
  • Social media can distort policymakers’ view of public opinion, in part because minority viewpoints are underrepresented. Women are underrepresented in political discussion on Facebook, for example.

Of course, Facebook highlights the company’s positive contributions to democracy. It does expose some people to journalism who might not otherwise see it, and encourages them to discuss it. It registers voters and created a tool to let Americans explore their local ballots.

But compared to the negative effects that Facebook now admits to, these contributions can look small. Meanwhile, in a near-weekly series of blog posts, Facebook builds the case against itself. Most people will continue using it as normal. But increasingly, they have reason to wonder: should we?

My emphasis.