This morning we got a chance to talk briefly with Walt Ehmer, CEO of Waffle House. For our global readers unfamiliar with the American South, the Waffle House is a diner chain that functions like a medieval tavern: it sits at every crossroads and it is always open. The Waffle House is where travelers meet each other. It provides refuge for the tipsy, the joyful, the insomniacs. If you grew up below the Mason-Dixon Line, something happened to you once at the Waffle House.
The definition of a Waffle House is this: when you need one, it’s nearby and serving. Which is why the chain has to be so good at staying open after a hurricane. Here’s how Ehmer puts it:
I tell people all the time, I said, we’re really not that smart, we’re not that complicated. We just have a lot of want to. We want to be there for the community. We want to be there for our people. We want to be there for the first responders.
The way the Waffle House stays open is, in fact, really smart and really complicated. A 2010 case study in the International Journal of Production Economics walks through what “a lot of want to” looks like.
David Auerbach looks at how classification of content affects social networks and observes three characteristics:
- In any computational context, explicitly structured data floats to the top.
- For any data set, the classification is more important than what’s being classified.
- Simpler classifications will tend to defeat more elaborate classifications.
Why should I invest both time and emotion in a service that actually cares so little about its users — and, in fact, about the health of the society it now influences? The excuse that Twitter holds up a mirror to wider society is hogwash: it has consistently, and with an outstanding level of ill-judgement, given a platform to and cultivated people with utterly reprehensible views.
If you’re an out and out vile individual, like Alex Jones, Twitter gives you a free pass. If you’re a conspiracy theorist who wants to get traction for your lies, Twitter is your friend. If you’re a racist, Twitter will defend your “free speech rights”.
But if you’re a woman getting vile, violent and consistent abuse, Twitter will do precisely nothing to stop it.
Without Twitter, the insanity that is QAnon couldn’t have gained the traction it has. Confined to 4chan, it would have been yet another crackpot piece of tomfoolery. Amplified unchallenged by Twitter, it becomes a series of signs held up at Trump’s rallies, and a truck parked across a highway. It won’t be too long before it becomes a death.
In the end, I decided that Twitter doesn’t deserve my attention. I couldn’t, in good faith, support a service that cares so little about the culture around it, that does nothing to be a positive influence on society, and that sees the rights of little lost boys to abuse women as more important than the rights of women not to be abused.
A 410 error means “gone.” In Google’s terms, “the server returns this response when the requested resource has been permanently removed. It is similar to a 404 (Not found) code, but is sometimes used in the place of a 404 for resources that used to exist but no longer do.”
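As a sketch of that distinction (the routes and the helper function here are hypothetical, purely for illustration): a server that keeps track of which resources it deliberately removed can answer 410 for those, and reserve 404 for paths it has never known about.

```python
# Hypothetical illustration of 404 ("Not Found") vs. 410 ("Gone").
# 410 tells clients and crawlers the resource existed and was removed
# on purpose, so they can stop asking; 404 only says it isn't there.

LIVE = {"/about", "/contact"}            # resources that currently exist
REMOVED = {"/old-feed", "/2004-archive"}  # resources deliberately taken down

def status_for(path: str) -> int:
    """Return the HTTP status code the server should send for `path`."""
    if path in LIVE:
        return 200  # OK
    if path in REMOVED:
        return 410  # Gone: used to exist, permanently removed
    return 404      # Not Found: never existed (or unknown to the server)

print(status_for("/about"))      # 200
print(status_for("/old-feed"))   # 410
print(status_for("/nope"))       # 404
```

In practice the distinction matters mostly to crawlers: Google treats a 410 as a stronger signal than a 404 that a URL can be dropped from the index.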
Human rights groups and researchers have been warning Facebook that its platform was being used to spread misinformation and promote hatred of Muslims, particularly the Rohingya, since 2013. As its user base exploded to 18 million, so too did hate speech, but the company was slow to react and earlier this year found its platform accused by a UN investigator of fuelling anti-Muslim violence.
The Australian journalist and researcher Aela Callan warned Facebook about the spread of anti-Rohingya posts on the platform in November 2013. She met with the company’s most senior communications and policy executive, Elliott Schrage. He referred her to staff at Internet.org, the company’s effort to connect the developing world, and a couple of Facebook employees who dealt with civil society groups. “He didn’t connect me to anyone inside Facebook who could deal with the actual problem,” she told Reuters.
In mid-2014, after false rumours online about a Muslim man raping a Buddhist woman triggered deadly riots in the city of Mandalay, the Myanmar government requested a crisis meeting with Facebook. Facebook said that government representatives should send an email when they saw examples of dangerous false news and the company would review them.
It took until April this year – four years later – for Mark Zuckerberg to tell Congress that Facebook would step up its efforts to block hate messages in Myanmar, saying “we need to ramp up our effort there dramatically”.
Since then it has deleted some known hate figures from the platform, but this week’s Reuters investigation – which found more than 1,000 posts, images and videos attacking Myanmar’s Muslims – shows there’s a long way to go.
A key issue that civil society groups focus on is Facebook’s lack of Burmese-speaking content moderators. In early 2015, there were just two of them.
Until Wednesday of this week, Facebook had refused to reveal how many Burmese-speaking content reviewers it has hired since then.
Wikipedia articles also have stringent requirements for what information can be included. The three main tenets are that information on the site (1) be presented from a neutral point of view, (2) be verifiable via an outside source, and (3) not be based on original research. Each of these can be quibbled with (what does “neutral” mean?), and plenty of questionable statements slip through — but, luckily, you probably know that they’re questionable because of the infamous “[citation needed]” superscript that peppers the website.
Actual misinformation, meanwhile, is dealt with directly. Consider how the editors treat conspiracy theories. “Fringe theories may be mentioned, but only with the weight accorded to them in the reliable sources being cited,” Wikimedia tweeted in an explanatory thread earlier this week. In contrast, platform companies have spent much of the last year talking about maintaining their role as a platform for “all viewpoints,” and through design and presentation, they flatten everything users post to carry the same weight. A documentary on YouTube is presented in the exact same manner as an Infowars video, and until now, YouTube has felt no responsibility to draw distinctions.
But really, I’d argue that Wikipedia’s biggest asset is its willingness as a community and website to “delete.” It’s that simple. If there’s bad information, or info that’s just useless, Wikipedia’s regulatory system has the ability to discard it.
Deleting data is antithetical to data-reliant companies like Facebook and Google (which owns YouTube). This is because they are heavily invested in machine learning, which requires almost incomprehensibly large data sets on which to train programs so that they can eventually operate autonomously. The more pictures of cats there are online, the easier it is to train a computer to recognize a cat. For Facebook and Google, the idea of deleting data is sacrilege. Their solutions to fake news and misinformation have been to throw more data at the problem: third-party fact-checkers and “disputed” flags giving equal weight to every side of a debate that really only has one.
Vulture interviews Penn (the one who talks). Interesting guy.
The next day on the Hannity show, Dorsey elaborated. “We do believe in the power of free expression, but we always need to balance that with the fact that bad-faith actors intentionally try to silence other voices.”
As has often been the case with Twitter’s haphazard enforcement, it is difficult to reconcile this with the fact that Jones is a bad-faith actor by his own admission — or at least the admission of his lawyer during a custody battle. “He’s playing a character,” Jones’ attorney Randall Wilhite told the judge during a pretrial hearing, claiming that Jones should be held no more accountable for his actions than Jack Nicholson would for playing the Joker in a Batman movie. Rather than a fiery iconoclast telling controversial truths, he’s simply “a performance artist.” What, if anything, does it mean to say that you are concerned about bad actors when you also vociferously defend providing a megaphone to perhaps the most extreme bad-faith commentator in political discourse?
Twitter, which once identified itself as “the free speech wing of the free speech party,” has long listed toward the sort of free speech absolutism that says absolutely anything goes, so long as it isn’t overtly criminal. It’s a popular idea among the Silicon Valley cyberlibertarians who hold some of the most powerful positions at tech companies and, not coincidentally, a founding principle of the internet itself.
There lies, within this absolutism, an often very idealistic and sincere belief: if we simply allow all speech to compete in the free marketplace of ideas, then the best, most productive, and most truthful ideas will win out. Sunlight is the best disinfectant, and the best answer to bad, shitty, and sometimes even abusive speech is simply more speech.
Dorsey echoed this belief in his thread defending Jones as a legitimate and not-at-all-in-violation-of-Twitter-rules user: “Accounts like Jones’ can often sensationalize issues and spread unsubstantiated rumors, so it’s critical journalists document, validate, and refute such information directly so people can form their own opinions. This is what serves the public conversation best.”
The article has a good roundup of studies showing that the truth is no match for a well-judged lie.
A new report from the Institute For the Future on “state-sponsored trolling” documents the rise and rise of government-backed troll armies who terrorize journalists and opposition figures with seemingly endless waves of individuals who bombard their targets with vile vitriol, from racial slurs to rape threats.
The report traces the origin of the phenomenon to a series of high-profile social media opposition bids that challenged the world’s most restrictive regimes, from Gezi Park in Turkey to the Arab Spring.
After the initial rebellions were put down, authoritarians studied and adapted the tactics that made them so effective, taking a leaf out of US intelligence agencies’ playbook by buying or developing tools that would allow paid trolls to impersonate enormous crowds of cheering, loyal cyber-warriors.
After being blindsided by social media, the authoritarians found it easy to master: think of Cambodia, where a bid to challenge the might of the ruling party begat a Facebook-first strategy to suppress dissent, in which the authorities arrest and torture anyone who challenges them under their real name, then get Facebook to disconnect anyone who uses a pseudonym to avoid retaliation.
The rise of authoritarian troll armies has been documented before. Google’s Jigsaw division produced a detailed report on the phenomenon, but decided not to publish it. Bloomberg, which produced the excellent investigative supplement to the IFTF report, A Global Guide to State-Sponsored Trolling, drawing on a leaked copy of the Google research, implies that something nefarious happened to convince Google to suppress its research.