It’s the (Democracy-Poisoning) Golden Age of Free Speech

Via Charles Arthur, Zeynep Tufekci in Wired:

The most effective forms of censorship today involve meddling with trust and attention, not muzzling speech itself. As a result, they don’t look much like the old forms of censorship at all. They look like viral or coordinated harassment campaigns, which harness the dynamics of viral outrage to impose an unbearable and disproportionate cost on the act of speaking out. They look like epidemics of disinformation, meant to undercut the credibility of valid information sources. They look like bot-fueled campaigns of trolling and distraction, or piecemeal leaks of hacked materials, meant to swamp the attention of traditional media.

These tactics usually don’t break any laws or set off any First Amendment alarm bells. But they all serve the same purpose that the old forms of censorship did: They are the best available tools to stop ideas from spreading and gaining purchase. They can also make the big platforms a terrible place to interact with other people.

Even when the big platforms themselves suspend or boot someone off their networks for violating “community standards”—an act that does look to many people like old-fashioned censorship—it’s not technically an infringement on free speech, even if it is a display of immense platform power. Anyone in the world can still read what the far-right troll Tim “Baked Alaska” Gionet has to say on the internet. What Twitter has denied him, by kicking him off, is attention.

Many more of the most noble old ideas about free speech simply don’t compute in the age of social media. John Stuart Mill’s notion that a “marketplace of ideas” will elevate the truth is flatly belied by the virality of fake news. And the famous American saying that “the best cure for bad speech is more speech”—a paraphrase of Supreme Court justice Louis Brandeis—loses all its meaning when speech is at once mass but also nonpublic. How do you respond to what you cannot see? How can you cure the effects of “bad” speech with more speech when you have no means to target the same audience that received the original message?


Facebook is done with quality journalism. Deal with it.

Frederic Filloux in Monday Note:

Facebook killed the news media three times.

First, it killed the notion of brand. Year after year, the percentage of people able to recall where they got their news is dwindling. “I read it on Facebook” now applies to half the population of the United States and Europe, and much more in countries where Facebook embodies the Internet.

Second, the notion of authorship has also vanished. Almost nobody has a clue who wrote what. Gradually, the two pillars of the trusting relationship between the media and its customers eroded, before crumbling altogether. Facebook has flattened the news for good.

Third, Facebook annihilated the business model of news by opening the way to a massive, ultra-cheap and ultra-targeted advertising system that brings next to nothing to the publishers. The reality of Facebook’s revenue stream is harsh: a European publisher told me last week that its RPM (revenue per thousand views) for videos on Facebook was about 30 euro cents (roughly 37 US cents). A pittance.

This is the damage done by “move fast and break things”.

On the plus side, one hopes the media will now shed its sense of entitlement and adapt to the new world. It would be so much better if the media were responsible for their own distribution.

Tea if by sea, cha if by land: Why the world only has two words for tea


The term cha (茶) is “Sinitic,” meaning it is common to many varieties of Chinese. It began in China and made its way through central Asia, eventually becoming “chay” (چای) in Persian. That is no doubt due to the trade routes of the Silk Road, along which, according to a recent discovery, tea was traded over 2,000 years ago. This form spread beyond Persia, becoming chay in Urdu, shay in Arabic, and chay in Russian, among others. It even made its way to sub-Saharan Africa, where it became chai in Swahili. The Japanese and Korean terms for tea are also based on the Chinese cha, though those languages likely adopted the word even before its westward spread into Persian.

But that doesn’t account for “tea.” The Chinese character for tea, 茶, is pronounced differently by different varieties of Chinese, though it is written the same in them all. In today’s Mandarin, it is chá. But in the Min Nan variety of Chinese, spoken in the coastal province of Fujian, the character is pronounced te. The key word here is “coastal.”

The te form used in coastal-Chinese languages spread to Europe via the Dutch, who became the primary traders of tea between Europe and Asia in the 17th century, as explained in the World Atlas of Language Structures. The main Dutch ports in east Asia were in Fujian and Taiwan, both places where people used the te pronunciation. The Dutch East India Company’s expansive tea importation into Europe gave us the French thé, the German tee, and the English tea.

There’s a great map to demonstrate it visually.


How to Fix Facebook—Before It Fixes Us

Roger McNamee, an early Facebook investor and mentor to Mark Zuckerberg, in Washington Monthly:

Whenever you log into Facebook, there are millions of posts the platform could show you. The key to its business model is the use of algorithms, driven by individual user data, to show you stuff you’re more likely to react to. Wikipedia defines an algorithm as “a set of rules that precisely defines a sequence of operations.” Algorithms appear value neutral, but the platforms’ algorithms are actually designed with a specific value in mind: maximum share of attention, which optimizes profits. They do this by sucking up and analyzing your data, using it to predict what will cause you to react most strongly, and then giving you more of that.

Algorithms that maximize attention give an advantage to negative messages. People tend to react more to inputs that land low on the brainstem. Fear and anger produce a lot more engagement and sharing than joy. The result is that the algorithms favor sensational content over substance. Of course, this has always been true for media; hence the old news adage “If it bleeds, it leads.” But for mass media, this was constrained by one-size-fits-all content and by the limitations of delivery platforms. Not so for internet platforms on smartphones. They have created billions of individual channels, each of which can be pushed further into negativity and extremism without the risk of alienating other audience members. To the contrary: the platforms help people self-segregate into like-minded filter bubbles, reducing the risk of exposure to challenging ideas.

It took Brexit for me to begin to see the danger of this dynamic. I’m no expert on British politics, but it seemed likely that Facebook might have had a big impact on the vote because one side’s message was perfect for the algorithms and the other’s wasn’t. The “Leave” campaign made an absurd promise—there would be savings from leaving the European Union that would fund a big improvement in the National Health Service—while also exploiting xenophobia by casting Brexit as the best way to protect English culture and jobs from immigrants. It was too-good-to-be-true nonsense mixed with fearmongering.

Meanwhile, the Remain campaign was making an appeal to reason. Leave’s crude, emotional message would have been turbocharged by sharing far more than Remain’s. I did not see it at the time, but the users most likely to respond to Leave’s messages were probably less wealthy and therefore cheaper for the advertiser to target: the price of Facebook (and Google) ads is determined by auction, and the cost of targeting more upscale consumers gets bid up higher by actual businesses trying to sell them things. As a consequence, Facebook was a much cheaper and more effective platform for Leave in terms of cost per user reached. And filter bubbles would ensure that people on the Leave side would rarely have their questionable beliefs challenged. Facebook’s model may have had the power to reshape an entire continent.

Great article.
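The ranking dynamic McNamee describes can be reduced to a toy sketch. Everything below is invented for illustration (the `Post` fields, the reaction scores — none of this is Facebook’s actual system); the point is only that a feed sorted purely on predicted per-user reaction has no term for truth, civility, or public value:

```python
# Toy sketch of an engagement-maximizing feed ranker. Illustrative only:
# the model, fields, and numbers are invented, not Facebook's real system.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_reaction: float  # model's estimate of how strongly THIS user reacts

def rank_feed(posts, limit=10):
    """Order candidate posts by predicted per-user reaction, highest first.

    Note what is absent: no term for accuracy, civility, or public value.
    Whatever maximizes reaction -- often fear or anger -- floats to the top.
    """
    return sorted(posts, key=lambda p: p.predicted_reaction, reverse=True)[:limit]

posts = [
    Post("Calm policy explainer", 0.12),
    Post("Outrage-bait headline", 0.87),
    Post("Friend's vacation photo", 0.45),
]
feed = rank_feed(posts)
# The outrage-bait post ranks first, because fear and anger score highest.
```

Because each user gets their own predicted-reaction scores, this is effectively a separate “channel” per person — which is McNamee’s point about billions of individual channels drifting toward extremes without alienating anyone else.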

Mark Zuckerberg sets toughest new year’s goal yet: fixing Facebook

The Guardian:

Amid unceasing criticism of Facebook’s immense power and pernicious impact on society, its CEO, Mark Zuckerberg, announced Thursday that his “personal challenge” for 2018 will be “to focus on fixing these important issues”.

Zuckerberg’s new year’s resolution – a tradition for the executive who in previous years has pledged to learn Mandarin, run 365 miles, and read a book each week – is a remarkable acknowledgment of the terrible year Facebook has had.

“Facebook has a lot of work to do – whether it’s protecting our community from abuse and hate, defending against interference by nation states, or making sure that time spent on Facebook is time well spent,” Zuckerberg wrote on his Facebook page. “We won’t prevent all mistakes or abuse, but we currently make too many errors enforcing our policies and preventing misuse of our tools.”

At the beginning of 2017, as many liberals were grappling with Donald Trump’s election and the widening divisions in American society, Zuckerberg embarked on a series of trips to meet regular Americans in all 50 states. But while Zuckerberg was donning hard hats and riding tractors, an increasing number of critics both inside and outside of the tech industry were identifying Facebook as a key driver of many of society’s current ills.

The past year has seen the social media company try – and largely fail – to get a handle on the proliferation of misinformation on its platform; acknowledge that it enabled a Russian influence operation to influence the US presidential election; and concede that its products can damage users’ mental health.

By attempting to take on these complex problems as his annual personal challenge, Zuckerberg is, for the first time, setting himself a task that he is unlikely to achieve. With 2 billion users and a presence in almost every country, the company’s challenges are no longer bugs that can be addressed by engineering code.

Facebook, like other tech giants, has long maintained that it is essentially politically neutral – the company has “community standards” but no clearly articulated political orientation. While in past years, that neutrality has enabled Facebook to grow at great speed without assuming responsibility for how individuals or governments used its tools, the political tumult of recent years has made such a stance increasingly untenable.

The difficulty of Facebook’s task is illustrated in the company’s current conundrum over enforcing US sanctions against some world leaders but not others, leaving observers to wonder what rules, if any, Facebook is actually playing by.

I know he’s come a long way since scoffing at the idea that Facebook had any problems at all, but isn’t this the sort of thing a CEO is paid to do as a, er, regular job, without the melodrama of a “personal challenge”?

Facebook Says It Is Deleting Accounts at the Direction of the U.S. and Israeli Governments

Glenn Greenwald on The Intercept:

FACEBOOK NOW SEEMS to be explicitly admitting that it also intends to follow the censorship orders of the U.S. government. Earlier this week, the company deleted the Facebook and Instagram accounts of Ramzan Kadyrov, the repressive, brutal, and authoritarian leader of the Chechen Republic, who had a combined 4 million followers on those accounts. To put it mildly, Kadyrov — who is given free rein to rule the province in exchange for ultimate loyalty to Moscow — is the opposite of a sympathetic figure: He has been credibly accused of a wide range of horrific human rights violations, from the imprisonment and torture of LGBTs to the kidnapping and killing of dissidents.

But none of that dilutes how disturbing and dangerous Facebook’s rationale for its deletion of his accounts is. A Facebook spokesperson told the New York Times that the company deleted these accounts not because Kadyrov is a mass murderer and tyrant, but because “Mr. Kadyrov’s accounts were deactivated because he had just been added to a United States sanctions list and that the company was legally obligated to act.”

As the Times notes, this rationale appears dubious or at least inconsistently applied: Others who are on the same sanctions list, such as Venezuelan President Nicolas Maduro, remain active on both Facebook and Instagram. But just consider the incredibly menacing implications of Facebook’s claims.

What this means is obvious: that the U.S. government — meaning, at the moment, the Trump administration — has the unilateral and unchecked power to force the removal of anyone it wants from Facebook and Instagram by simply including them on a sanctions list. Does anyone think this is a good outcome? Does anyone trust the Trump administration — or any other government — to compel social media platforms to delete and block anyone it wants to be silenced?


These Are 50 Of The Biggest Fake News Hits On Facebook In 2017

Buzzfeed News:

Facebook’s major effort to stop the spread of false articles on its platform did not result in less engagement for the most viral hoaxes in 2017, according to an analysis by BuzzFeed News. In fact, the analysis found that the 50 most viral fake news stories of 2017 generated more total shares, reactions, and comments than the top 50 hoaxes of 2016.

This highlights the challenge Facebook faces in halting the spread of completely false stories on its platform, and raises questions about how much progress has been made in fighting this type of misinformation. After a year of working with fact-checkers such as PolitiFact and Snopes, and stating that related product efforts can reduce the spread of a false news story by 80% once it’s been debunked, Facebook remains the home of massively viral hoaxes.

In response to the BuzzFeed News analysis, a Facebook spokesperson said the company’s initiatives continue to improve, and that the spread of pure fake news has been reduced on its platform.


‘The Russia Story’ as it was encountered in 2017 by rabid Democrats, recalcitrant Republicans, and everyone else on social media

New York Magazine:

Throughout 2017, as social-media-addled minds chased one shiny news object after another, one story was constant and inescapable: “Russia.”

But what “Russia” meant, exactly, depended a lot on where you got your news. The “Russia story” that developed throughout the year was likely very different from the “Russia story” as it was understood by your friends, relatives, or co-workers, not just because your politics might be different than theirs but because you might have been encountering different news entirely. To some Democrats, the “Russia Story” was that Donald Trump was a wholesale asset of the Russian government. To others, the “Russia Story” was the F.B.I.’s methodical investigation. And to many Republicans, the “Russia story” was actually a story about the Democrats’ collusion with Russia, not the Trump campaign’s.

We wanted to see how those competing narratives were shaped, week by week, story by story, through 2017 — examining not just broad “liberal” and “conservative” bubbles, but also the different ways highly partisan readers and their less-partisan neighbors might have encountered the story. Would outlets directed toward zealously partisan Democrats frame the story differently than those with readers more evenly distributed along the political spectrum? What about on the other side of the fence?

To re-create approximate social-media “bubbles,” we used “partisanship scores,” developed by Harvard’s Berkman Klein Center. Rather than attempt to arbitrarily measure the slant of a given publication’s editorial line, these scores gauge the partisanship of its audience, by measuring how frequently its stories were shared by Clinton or Trump supporters during the 2016 election. For our purposes, outlets shared vastly more often by Clinton supporters than Trump supporters are categorized as “Highly Democratic”; outlets where the ratio was closer but still weighted toward Clinton were labeled “Mostly Democratic”; and so on. These reconstructed bubbles aren’t a perfect representation of how people encountered their news, but the partisanship scores allow us to focus on publications’ actual audience — the real citizens of the bubble — rather than our perceptions of their politics.

Fascinating comparative demonstration of the real-world effect of different news sources in a digital age.
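The scoring method the piece describes is simple enough to sketch. The category names come from the article; the score formula and the numeric thresholds below are my own illustrative assumptions, not the Berkman Klein Center’s actual methodology:

```python
# Illustrative sketch of an audience-based "partisanship score": gauge an
# outlet by who shares it, not by its editorial line. Category names follow
# the article; the formula and thresholds are invented for illustration.

def partisanship_score(clinton_shares, trump_shares):
    """Score in [-1, 1]: -1 means shared only by Clinton supporters,
    +1 means shared only by Trump supporters."""
    total = clinton_shares + trump_shares
    if total == 0:
        return 0.0
    return (trump_shares - clinton_shares) / total

def categorize(score):
    """Bucket a score into the article's categories (thresholds assumed)."""
    if score <= -0.6:
        return "Highly Democratic"
    if score <= -0.2:
        return "Mostly Democratic"
    if score < 0.2:
        return "Centrist"
    if score < 0.6:
        return "Mostly Republican"
    return "Highly Republican"

# e.g. an outlet shared 9,000 times by Clinton supporters, 1,000 by Trump
score = partisanship_score(9000, 1000)  # (1000 - 9000) / 10000 = -0.8
label = categorize(score)               # "Highly Democratic"
```

The design choice worth noting is the one the article emphasizes: the input is audience behavior (who shared what), so the score reflects the real citizens of each bubble rather than anyone’s perception of an outlet’s politics.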

How Facebook’s Political Unit Enables the Dark Art of Digital Propaganda


Under fire for Facebook Inc.’s role as a platform for political propaganda, co-founder Mark Zuckerberg has punched back, saying his mission is above partisanship. “We hope to give all people a voice and create a platform for all ideas,” Zuckerberg wrote in September after President Donald Trump accused Facebook of bias.

Zuckerberg’s social network is a politically agnostic tool for its more than 2 billion users, he has said. But Facebook, it turns out, is no bystander in global politics. What he hasn’t said is that his company actively works with political parties and leaders including those who use the platform to stifle opposition—sometimes with the aid of “troll armies” that spread misinformation and extremist ideologies.

The initiative is run by a little-known Facebook global government and politics team that’s neutral in that it works with nearly anyone seeking or securing power. The unit is led from Washington by Katie Harbath, a former Republican digital strategist who worked on former New York Mayor Rudy Giuliani’s 2008 presidential campaign. Since Facebook hired Harbath three years later, her team has traveled the globe helping political clients use the company’s powerful digital tools.

In some of the world’s biggest democracies—from India and Brazil to Germany and the U.K.—the unit’s employees have become de facto campaign workers. And once a candidate is elected, the company in some instances goes on to train government employees or provide technical assistance for live streams at official state events.

If you spend a lot of money with Facebook, of course they’ll help you spend it effectively. I’m not sure Facebook has really thought through the implications of what it’s doing, or the scope for its role to go (or do) wrong.

The Guardian’s David Pemsel: ‘Our relationship with Facebook is difficult’

David Pemsel, CEO of Guardian Media Group, interviewed in Digiday UK:

What’s next for publishers’ relationship with Facebook and Google?
We have a close relationship with Google from [CEO] Sundar [Pichai] down. They recognize the role of quality news within their ecosystem. So we’ve collaborated a lot around video, VR funding, data analytics and engineering resources. It’s a valuable strategic relationship.

What about Facebook?
Facebook is a different picture. Our relationship with them is difficult because we’ve not found the strategic meeting point on which to collaborate. Eighteen months ago, they changed their algorithm, which showed their business model was built on virality, not on the distribution of quality. We argue that quality, for societal reasons, as well as to derive ad revenue, should be part of their ecosystem. It’s not. We came out of Instant Articles because we didn’t want to provide our journalism in return for nothing. When you have algorithms that are fueling fake news and virality with no definition around what’s good or bad, how can the Guardian play a role within that ecosystem? The idea of what the Guardian does being starved of oxygen in those environments is not only damaging to our business model but damaging to everyone.

My emphasis.