How Facebook Has Flattened Human Communication

David Auerbach looks at how classification of content affects social networks and observes three characteristics:

  • In any computational context, explicitly structured data floats to the top.
  • For any data set, the classification is more important than what’s being classified.
  • Simpler classifications will tend to defeat more elaborate classifications.
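
To make the first and third observations concrete, here is a hypothetical Python sketch (the Reaction enum and the sample posts are my own illustration, not Auerbach's): the structured reaction field can be counted and ranked in one line, while the free-form text resists computation, so the simpler classification ends up driving the system.

```python
from collections import Counter
from enum import Enum

# A deliberately simple classification, in the style of Facebook's
# reaction buttons (hypothetical names, for illustration only).
class Reaction(Enum):
    LIKE = "like"
    LOVE = "love"
    ANGRY = "angry"

posts = [
    {"text": "A nuanced 800-word reflection on local politics...",
     "reactions": [Reaction.LIKE, Reaction.ANGRY, Reaction.ANGRY]},
    {"text": "lol",
     "reactions": [Reaction.LIKE, Reaction.LIKE, Reaction.LOVE]},
]

# The structured field is trivially computable: one line to aggregate,
# rank, and sort every post on the platform.
for post in posts:
    print(Counter(r.value for r in post["reactions"]))

# The unstructured field is opaque to the system: ranking by it would
# require NLP, human review, or both, so the simple classification
# floats to the top and ends up driving distribution.
```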

410 Gone

Ian Betteridge:

Why should I make an investment both in time and emotion in a service that actually cares so little about its users — and, in fact, about the health of the society it now influences? The excuse that Twitter holds up a mirror to wider society is hogwash: it has consistently and with an outstanding level of ill-judgement given a platform to and cultivated people with utterly reprehensible views.

If you’re an out and out vile individual, like Alex Jones, Twitter gives you a free pass. If you’re a conspiracy theorist who wants to get traction for your lies, Twitter is your friend. If you’re a racist, Twitter will defend your “free speech rights”.

But if you’re a woman getting vile, violent and consistent abuse, Twitter will do precisely nothing to stop it.

Without Twitter, the insanity that is QAnon couldn’t have gained the traction it has. Confined to 4chan, it would have been yet another crackpot piece of tomfoolery. Amplified unchallenged by Twitter, it becomes a series of signs held up at Trump’s rallies, and a truck parked across a highway. It won’t be too long before it becomes a death.

In the end, I decided that Twitter doesn’t deserve my attention. I couldn’t, in good faith, support a service which cares so little about the culture around it, that does nothing to be a positive influence on society, which sees the rights of little lost boys to abuse women as more important than the rights of women not to be abused.

A 410 error means “gone.” In Google’s words, “the server returns this response when the requested resource has been permanently removed. It is similar to a 404 (Not found) code, but is sometimes used in the place of a 404 for resources that used to exist but no longer do.”
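
For the curious, here is a minimal sketch of the distinction using only Python's standard library (the /deactivated-account path is a placeholder of my own, not anything Twitter actually serves):

```python
from http import HTTPStatus
from http.server import BaseHTTPRequestHandler, HTTPServer

class GoneHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # 410 tells clients (and crawlers) the resource is gone for
        # good; 404 leaves open the chance it may exist again later.
        if self.path == "/deactivated-account":  # placeholder path
            self.send_response(HTTPStatus.GONE)       # 410
        else:
            self.send_response(HTTPStatus.NOT_FOUND)  # 404
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), GoneHandler).serve_forever()
```

The difference matters in practice: a 404 URL may be retried, while a 410 signals that the removal is deliberate and permanent.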

Facebook’s failure in Myanmar is the work of a blundering toddler

The Guardian:

Human rights groups and researchers have been warning Facebook that its platform was being used to spread misinformation and promote hatred of Muslims, particularly the Rohingya, since 2013. As its user base exploded to 18 million, so too did hate speech, but the company was slow to react and earlier this year found its platform accused by a UN investigator of fuelling anti-Muslim violence.

The Australian journalist and researcher Aela Callan warned Facebook about the spread of anti-Rohingya posts on the platform in November 2013. She met with the company’s most senior communications and policy executive, Elliott Schrage. He referred her to staff at Internet.org, the company’s effort to connect the developing world, and a couple of Facebook employees who dealt with civil society groups. “He didn’t connect me to anyone inside Facebook who could deal with the actual problem,” she told Reuters.

In mid-2014, after false rumours online about a Muslim man raping a Buddhist woman triggered deadly riots in the city of Mandalay, the Myanmar government requested a crisis meeting with Facebook. Facebook said that government representatives should send an email when they saw examples of dangerous false news and the company would review them.

It took until April this year – four years later – for Mark Zuckerberg to tell Congress that Facebook would step up its efforts to block hate messages in Myanmar, saying “we need to ramp up our effort there dramatically”.

Since then it has deleted some known hate figures from the platform, but this week’s Reuters investigation – which found more than 1,000 posts, images and videos attacking Myanmar’s Muslims – shows there’s a long way to go.

A key issue that civil society groups focus on is Facebook’s lack of Burmese-speaking content moderators. In early 2015, there were just two of them.

Until Wednesday of this week, Facebook had refused to reveal how many Burmese-speaking content reviewers it has hired since then.

Why Wikipedia Works

NYMag:

Wikipedia articles also have stringent requirements for what information can be included. The three main tenets are that (1) information on the site be presented in a neutral point of view, (2) be verified by an outside source, and (3) not be based on original research. Each of these can be quibbled with (what does “neutral” mean?), and plenty of questionable statements slip through — but, luckily, you probably know that they’re questionable because of the infamous “[citation needed]” superscript that peppers the website.

Actual misinformation, meanwhile, is dealt with directly. Consider how the editors treat conspiracy theories. “Fringe theories may be mentioned, but only with the weight accorded to them in the reliable sources being cited,” Wikimedia tweeted in an explanatory thread earlier this week. In contrast, platform companies have spent much of the last year talking about maintaining their role as a platform for “all viewpoints,” and through design and presentation, they flatten everything users post to carry the same weight. A documentary on YouTube is presented in the exact same manner as an Infowars video, and until now, YouTube has felt no responsibility to draw distinctions.

But really, I’d argue that Wikipedia’s biggest asset is its willingness as a community and website to “delete.” It’s that simple. If there’s bad information, or info that’s just useless, Wikipedia’s regulatory system has the ability to discard it.

Deleting data is antithetical to data-reliant companies like Facebook and Google (which owns YouTube). This is because they are heavily invested in machine learning, which requires almost incomprehensibly large data sets on which to train programs so that they can eventually operate autonomously. The more pictures of cats there are online, the easier it is to train a computer to recognize a cat. For Facebook and Google, the idea of deleting data is sacrilege. Their solution to fake news and misinformation has been to throw more data at the problem: third-party fact-checkers and “disputed” flags giving equal weight to every side of a debate that really only has one.
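
A minimal sketch of that dynamic, using scikit-learn with synthetic data standing in for the cat photos: the same model, trained on progressively larger slices of the data, scores better on held-out examples.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for "photos of cats vs. not-cats".
X, y = make_classification(n_samples=20000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

# Train the same model on progressively larger slices of the data.
for n in (50, 500, 5000, 10000):
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train[:n], y_train[:n])
    print(f"{n:>6} training examples -> "
          f"test accuracy {model.score(X_test, y_test):.3f}")

# Accuracy generally climbs with n: deleting training data directly
# degrades the models these companies depend on.
```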

Twitter is wrong: facts are not enough to combat Alex Jones

The Verge:

The next day on the Hannity show, Dorsey elaborated. “We do believe in the power of free expression, but we always need to balance that with the fact that bad-faith actors intentionally try to silence other voices.”

As has often been the case with Twitter’s haphazard enforcement, it is difficult to reconcile this with the fact that Jones is a bad-faith actor by his own admission — or at least the admission of his lawyer during a custody battle. “He’s playing a character,” Jones’ attorney Randall Wilhite told the judge during a pretrial hearing, claiming that Jones should be held no more accountable for his actions than Jack Nicholson would for playing the Joker in a Batman movie. Rather than a fiery iconoclast telling controversial truths, he’s simply “a performance artist.” What, if anything, does it mean to say that you are concerned about bad actors when you also vociferously defend providing a megaphone to perhaps the most extreme bad-faith commentator in political discourse?

Twitter, which once identified itself as “the free speech wing of the free speech party,” has long listed toward the sort of free speech absolutism that says absolutely anything goes, so long as it isn’t overtly criminal. It’s a popular idea among the Silicon Valley cyberlibertarians who hold some of the most powerful positions at tech companies and, not coincidentally, a founding principle of the internet itself.

There lies, within this absolutism, an often very idealistic and sincere belief: if we simply allow all speech to compete in the free marketplace of ideas, then the best, most productive, and most truthful ideas will win out. Sunlight is the best disinfectant, and the best answer to bad, shitty, and sometimes even abusive speech is simply more speech.

Dorsey echoed this belief in his thread defending Jones as a legitimate and not-at-all-in-violation-of-Twitter-rules user: “Accounts like Jones’ can often sensationalize issues and spread unsubstantiated rumors, so it’s critical journalists document, validate, and refute such information directly so people can form their own opinions. This is what serves the public conversation best.”

My emphasis.

The article has a good roundup of studies showing that the truth is no match for a well-judged lie.

Authoritarians used to be scared of social media, now they rule it

Boing Boing:

A new report from the Institute for the Future on “state-sponsored trolling” documents the rise and rise of government-backed troll armies who terrorize journalists and opposition figures with seemingly endless waves of individuals who bombard their targets with vile vitriol, from racial slurs to rape threats.

The report traces the origin of the phenomenon to a series of high-profile social media opposition bids that challenged the world’s most restrictive regimes, from Gezi Park in Turkey to the Arab Spring.

After the initial rebellions were put down, authoritarians studied and adapted the tactics that made them so effective, taking a leaf out of US intelligence agencies’ playbook by buying or developing tools that would allow paid trolls to impersonate enormous crowds of cheering, loyal cyber-warriors.

After being blindsided by social media, the authoritarians found it easy to master it: think of Cambodia, where a bid to challenge the might of the ruling party begat a Facebook-first strategy to suppress dissent, in which government authorities arrest and torture anyone who challenges them using their real name, and then get Facebook to disconnect anyone who uses a pseudonym to avoid retaliation.

The rise of authoritarian troll armies has been documented before. Google’s Jigsaw division produced a detailed report on the phenomenon but decided not to publish it. Bloomberg, which produced A Global Guide to State-Sponsored Trolling, an excellent investigative supplement to the IFTF report that draws on a leaked copy of the Google research, implies that something nefarious happened to convince Google to suppress it.

Billionaires Behaving Badly

Scott Galloway again in fine form:

They [Mr. Zuckerberg or Ms. Sandberg] wrap themselves in First-Amendment or “we want to give voice to the unheard” blankets, yet there’s nothing in either of their backgrounds that hints at a passion for First Amendment rights. Zuck’s robotic repetition that their “mission is to connect the world” at the congressional hearings in May was an appeal to pathos that flies in the face of abundant research of the human propensity towards division, tribalism, and violence. Getting a Facebook account doesn’t magically melt users’ hatred for groups they are convinced are inferior. Instead, outrage spreads faster than love. And that’s the goal: more clicks. Reactions equal engagement — a model Facebook could change.

As it is, Facebook doesn’t remove fake news (they call it “false news”). Fake news still spreads, just to fewer people. (No one knows what “fewer people” means, in percentages, ratios, or numbers.) So Infowars claimed to its 900,000 followers that Dems were going to start a second Civil War on the 4th of July. That wasn’t seen as inciting violence, and it wasn’t removed — neither for inciting violence nor for being “false news.” Unclear if it was “shown to fewer people.” Neither, apparently, was Pizzagate seen as inciting violence, though it involved real-life violence. Facebook has turned to their playbook of delay and obfuscation, and refuses to cool the echo chambers of misinformation, as it profits from them.

It would be nice to believe that the third-wealthiest person in the world and an executive who’s written eloquently about work-life balance for women and personal loss would have more concern for the commonwealth, and society writ large. But they have not demonstrated this. Their defensiveness is dangerous blather crafted by the army of PR execs at Facebook. To be fair, they’re no better or worse than tobacco executives claiming, “Tobacco is not addictive” or search execs professing, “Information wants to be free.” They have all lied to buttress their wealth, full stop. This is an externality of capitalism that’s usually addressed with regulation. Usually.

This will continue to happen unless we, American citizens, elect people who have the domain expertise and stones to hold big tech to the same scrutiny we apply to other firms. We don’t even need new regulation to break them up, but just to enforce the current regulations on media firms.

Which. They. Are.