Why Wikipedia Works

NYMag:

Wikipedia articles also have stringent requirements for what information can be included. The three main tenets are that information on the site (1) be presented from a neutral point of view, (2) be verified by an outside source, and (3) not be based on original research. Each of these can be quibbled with (what does “neutral” mean?), and plenty of questionable statements slip through — but, luckily, you probably know that they’re questionable because of the infamous “[citation needed]” superscript that peppers the website.

Actual misinformation, meanwhile, is dealt with directly. Consider how the editors treat conspiracy theories. “Fringe theories may be mentioned, but only with the weight accorded to them in the reliable sources being cited,” Wikimedia tweeted in an explanatory thread earlier this week. In contrast, platform companies have spent much of the last year talking about maintaining their role as a platform for “all viewpoints,” and through design and presentation, they flatten everything users post to carry the same weight. A documentary on YouTube is presented in the exact same manner as an Infowars video, and until now, YouTube has felt no responsibility to draw distinctions.

But really, I’d argue that Wikipedia’s biggest asset is its willingness as a community and website to “delete.” It’s that simple. If there’s bad information, or info that’s just useless, Wikipedia’s regulatory system has the ability to discard it.

Deleting data is antithetical to data-reliant companies like Facebook and Google (which owns YouTube). This is because they are heavily invested in machine learning, which requires almost incomprehensibly large data sets on which to train programs so that they can eventually operate autonomously. The more pictures of cats there are online, the easier it is to train a computer to recognize a cat. For Facebook and Google, the idea of deleting data is sacrilege. Their solution to fake news and misinformation has been to throw more data at the problem: third-party fact-checkers and “disputed” flags giving equal weight to every side of a debate that really only has one.
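The cat example is doing real work here. Below is a minimal sketch of the underlying economics, assuming PyTorch; the “cat-ness” rule, the toy network, and the synthetic images are all stand-ins I invented, not anyone’s actual pipeline. The only point it makes is that held-out accuracy scales with how much labelled data you keep:

```python
# A toy illustration of "more data, better model". Everything here is a
# stand-in: random tensors play the role of labelled cat photos, and a
# planted rule (mean pixel value > 0) plays the role of "cat-ness".
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_dataset(n):
    """Fake 32x32 RGB images with a hidden rule the model must learn."""
    x = torch.randn(n, 3, 32, 32)
    y = (x.mean(dim=(1, 2, 3)) > 0).float()  # stand-in for the cat label
    return x, y

def train_and_score(n_train):
    """Train a tiny conv net on n_train examples; return held-out accuracy."""
    x_train, y_train = make_dataset(n_train)
    x_test, y_test = make_dataset(1000)
    model = nn.Sequential(
        nn.Conv2d(3, 8, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),  # global average pool down to 8 features
        nn.Flatten(),
        nn.Linear(8, 1),
    )
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(100):  # fixed training budget for every data size
        opt.zero_grad()
        loss = loss_fn(model(x_train).squeeze(1), y_train)
        loss.backward()
        opt.step()
    with torch.no_grad():
        preds = (model(x_test).squeeze(1) > 0).float()
    return (preds == y_test).float().mean().item()

# Identical model and training budget; only the amount of retained data varies.
for n in (50, 500, 2000):
    print(f"trained on {n:4d} examples -> held-out accuracy {train_and_score(n):.2f}")
```

Deleting training data means deliberately moving down that curve, which is a fair summary of why “delete” is a dirty word in Menlo Park and Mountain View.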

Twitter is wrong: facts are not enough to combat Alex Jones

The Verge:

The next day on the Hannity show, Dorsey elaborated. “We do believe in the power of free expression, but we always need to balance that with the fact that bad-faith actors intentionally try to silence other voices.”

As has often been the case with Twitter’s haphazard enforcement, it is difficult to reconcile this with the fact that Jones is a bad-faith actor by his own admission — or at least the admission of his lawyer during a custody battle. “He’s playing a character,” Jones’ attorney Randall Wilhite told the judge during a pretrial hearing, claiming that Jones should be held no more accountable for his actions than Jack Nicholson would be for playing the Joker in a Batman movie. Rather than a fiery iconoclast telling controversial truths, he’s simply “a performance artist.” What, if anything, does it mean to say that you are concerned about bad actors when you also vociferously defend providing a megaphone to perhaps the most extreme bad-faith commentator in political discourse?

Twitter, which once identified itself as “the free speech wing of the free speech party,” has long listed toward the sort of free speech absolutism that says absolutely anything goes, so long as it isn’t overtly criminal. It’s a popular idea among the Silicon Valley cyberlibertarians who hold some of the most powerful positions at tech companies and, not coincidentally, a founding principle of the internet itself.

There lies, within this absolutism, an often very idealistic and sincere belief: if we simply allow all speech to compete in the free marketplace of ideas, then the best, most productive, and most truthful ideas will win out. Sunlight is the best disinfectant, and the best answer to bad, shitty, and sometimes even abusive speech is simply more speech.

Dorsey echoed this belief in his thread defending Jones as a legitimate and not-at-all-in-violation-of-Twitter-rules user: “Accounts like Jones’ can often sensationalize issues and spread unsubstantiated rumors, so it’s critical journalists document, validate, and refute such information directly so people can form their own opinions. This is what serves the public conversation best.”

My emphasis.

The article has a good roundup of studies showing that the truth is no match for a well-judged lie.

Authoritarians used to be scared of social media, now they rule it

Boing Boing:

A new report from the Institute for the Future on “state-sponsored trolling” documents the rise and rise of government-backed troll armies who terrorize journalists and opposition figures with seemingly endless waves of individuals who bombard their targets with vile vitriol, from racial slurs to rape threats.

The report traces the origin of the phenomenon to a series of high-profile social media opposition bids that challenged the world’s most restrictive regimes, from Gezi Park in Turkey to the Arab Spring.

After the initial rebellions were put down, authoritarians studied and adapted the tactics that made them so effective, taking a leaf out of US intelligence agencies’ playbook by buying or developing tools that would allow paid trolls to impersonate enormous crowds of cheering, loyal cyber-warriors.

After being blindsided by social media, the authoritarians found it easy to master it: think of Cambodia, where a bid to challenge the might of the ruling party begat a Facebook-first strategy to suppress dissent, in which government authorities arrest and torture anyone who challenges them using their real name, and then get Facebook to disconnect anyone who uses a pseudonym to avoid retaliation.

The rise of authoritarian troll armies has been documented before. Google’s Jigsaw division produced a detailed report on the phenomenon, but decided not to publish it. Bloomberg, whose excellent investigative supplement to the IFTF report, A Global Guide to State-Sponsored Trolling, draws on a leaked copy of the Google research, implies that something nefarious happened to convince Google to suppress its research.

Billionaires Behaving Badly

Scott Galloway again in fine form:

They [Mr. Zuckerberg or Ms. Sandberg] wrap themselves in First-Amendment or “we want to give voice to the unheard” blankets, yet there’s nothing in either of their backgrounds that hints at a passion for First Amendment rights. Zuck’s robotic repetition that their “mission is to connect the world” at the congressional hearings in May was an appeal to pathos that flies in the face of abundant research on the human propensity towards division, tribalism, and violence. Getting a Facebook account doesn’t magically melt users’ hatred for groups they are convinced are inferior. Instead, outrage spreads faster than love. And that’s the goal: more clicks. Reactions equal engagement — a model Facebook could change.

As it is, Facebook doesn’t remove fake news (they call them “false news”). Fake news still spreads, just to fewer people. (No one knows what “fewer people” means, in percentages, ratios, or numbers.) So Infowars claimed to its 900,000 followers that Dems were going to start a second Civil War on the 4th of July. That wasn’t seen as inciting violence, and it wasn’t removed — neither for inciting violence nor for being “false news.” Unclear if it was “shown to fewer people.” Neither, apparently, was Pizzagate seen as inciting violence, though it involved real-life violence. Facebook has turned to their playbook of delay and obfuscation, and refuses to cool the echo chambers of misinformation, as it profits from them.

It would be nice to believe that the third-wealthiest person in the world and an executive who’s written eloquently about work-life balance for women and personal loss would have more concern for the commonwealth, and society writ large. But they have not demonstrated this. Their defensiveness is dangerous blather crafted by the army of PR execs at Facebook. To be fair, they’re no better or worse than tobacco executives claiming, “Tobacco is not addictive” or search execs professing, “Information wants to be free.” They have all lied to buttress their wealth, full stop. This is an externality of capitalism that’s usually addressed with regulation. Usually.

This will continue to happen unless we, American citizens, elect people who have the domain expertise and stones to hold big tech to the same scrutiny we apply to other firms. We don’t even need new regulation to break them up, but just to enforce the current regulations on media firms.

Which. They. Are.

Zuckerberg defends Facebook users’ right to be wrong – even Holocaust deniers

The Guardian:

Mark Zuckerberg defended the rights of Facebook users to publish Holocaust denial posts, saying he didn’t “think that they’re intentionally getting it wrong”.

In an interview with Recode published on Wednesday, the CEO also explained Facebook’s decision to allow the far-right conspiracy theory website Infowars to continue using the platform, saying the social network would try to “reduce the distribution of that content”, but would not censor the page.

Touching that he can think Holocaust deniers aren’t getting it intentionally wrong.

Platforms like Facebook and YouTube have faced intense scrutiny for allowing the far-right commentator Alex Jones to continue to host his Infowars site, which most infamously has spread the false claim that the Sandy Hook mass shooting that killed 20 schoolchildren was a hoax.

That content, Zuckerberg said, would be removed if it was abusive towards an individual: “Going to someone who is a victim of Sandy Hook and telling them, ‘Hey, no, you’re a liar’ – that is harassment, and we actually will take that down.”

I don’t pretend to understand the difference between telling a Holocaust victim “you’re a liar” and saying the same thing to a Sandy Hook victim.

In the interview, the CEO noted that the Guardian “initially” alerted Facebook to the work of Aleksandr Kogan, the academic researcher who harvested the data, saying: “And when we learned about that, we immediately shut down the app, took away his profile, and demanded certification that the data was deleted.”

Facebook, however, did not suspend Kogan and the associated company until March of 2018, despite the Guardian’s reporting several years prior. A spokesperson later said that Zuckerberg had misspoken when he claimed the company “immediately … took away his profile”, admitting that this removal had not happened until this year.

Slippery as ever.

Later: How One of the Internet’s Biggest History Forums Deals With Holocaust Deniers. I’m puzzled why Facebook ignores how the most experienced moderators handle Holocaust denial.

Also: A Sandy Hook family to Mark Zuckerberg: why let Facebook lies hurt us even more? suggests that Facebook merely pretends to treat Sandy Hook survivors better; in practice it ignores them.


Top Voting Machine Vendor Admits It Installed Remote-Access Software on Systems Sold to States

Motherboard:

The nation’s top voting machine maker has admitted in a letter to a federal lawmaker that the company installed remote-access software on election-management systems it sold over a period of six years, raising questions about the security of those systems and the integrity of elections that were conducted with them.

Previously I linked to a video explaining how flawed voting machines are, and that was without remote-access software installed. If only they were regulated in the same way as slot machines.

The fallacy of obviousness

Aeon:

So if the gorilla experiment doesn’t illustrate that humans are blind to the obvious, then what exactly does it illustrate? What’s an alternative interpretation, and what does it tell us about perception, cognition and the human mind?

The alternative interpretation says that what people are looking for – rather than what people are merely looking at – determines what is obvious. Obviousness is not self-evident. Or as Sherlock Holmes said: ‘There is nothing more deceptive than an obvious fact.’ This isn’t an argument against facts or for ‘alternative facts’, or anything of the sort. It’s an argument about what qualifies as obvious, why and how. See, obviousness depends on what is deemed to be relevant for a particular question or task at hand. Rather than passively accounting for or recording everything directly in front of us, humans – and other organisms for that matter – instead actively look for things. The implication (contrary to psychophysics) is that mind-to-world processes drive perception rather than world-to-mind processes. The gorilla experiment itself can be reinterpreted to support this view of perception, showing that what we see depends on our expectations and questions – what we are looking for, what question we are trying to answer.

At first glance that might seem like a rather mundane interpretation, particularly when compared with the startling claim that humans are ‘blind to the obvious’. But it’s more radical than it might seem. This interpretation of the gorilla experiment puts humans centre-stage in perception, rather than relegating them to passively recording their surroundings and environments. It says that what we see is not so much a function of what is directly in front of us (Kahneman’s natural assessments), or what one is in camera-like fashion recording or passively looking at, but rather determined by what we have in our minds, for example, by the questions we have in mind. People miss the gorilla not because they are blind, but because they were prompted – in this case, by the scientists themselves – to pay attention to something else. The question – ‘How many basketball passes’ (just like any question: ‘Where are my keys?’) – primes us to see certain aspects of a visual scene, at the expense of any number of other things.

The biologist Jakob von Uexküll (1864-1944) argued that all species, humans included, have a unique ‘Suchbild’ – German for a seek- or search-image – of what they are looking for. In the case of humans, this search-image includes the questions, expectations, problems, hunches or theories that we have in mind, which in turn structure and direct our awareness and attention. The important point is that humans do not observe scenes passively or neutrally. In 1966, the philosopher Karl Popper conducted an informal experiment to make this point. During a lecture at the University of Oxford, he turned to his audience and said: ‘My experiment consists of asking you to observe, here and now. I hope you are all cooperating and observing! However, I feel that at least some of you, instead of observing, will feel a strong urge to ask: “What do you want me to observe?”’ Then Popper delivered his insight about observation: ‘For what I am trying to illustrate is that, in order to observe, we must have in mind a definite question, which we might be able to decide by observation.’

In other words, there is no neutral observation. The world doesn’t tell us what is relevant. Instead, it responds to questions. When looking and observing, we are usually directed toward something, toward answering specific questions or satisfying some curiosities or problems. ‘All observation must be for or against a point of view,’ is how Charles Darwin put it in 1861. Similarly, the art historian Ernst Gombrich in 1956 emphasised the role of the ‘beholder’s share’ in observation and perception.


Facebook bug set 14 million users’ sharing settings to public

Daring Fireball:

Heather Kelly, reporting for CNN:

For a period of four days in May, about 14 million Facebook users around the world had their default sharing setting for all new posts set to public, the company revealed Thursday.

The bug, which affected those users from May 18 to May 22, occurred while Facebook was testing a new feature.

David Frum:

It’s so weird that this never happens the other way around, settings accidentally changed so that Facebook users inadvertently get more privacy than they signed up for.

Yeah, so weird. What are the odds?

Yes, extraordinary how every Facebook bug is in their favour.