A recent, sprawling Wired feature outlined the results of its analysis of toxicity among online commenters across the United States. Unsurprisingly, it was catnip for everyone who’s ever heard the phrase “don’t read the comments.” According to “The Great Tech Panic: Trolls Across America,” Vermont has the most toxic online commenters, whereas Sharpsburg, Georgia, “is the least-toxic city in the US.”
There’s just one problem.
Malicious crowdsourcing forums are gaining traction as vehicles for spreading misinformation online, but are limited by the costs of hiring and managing human workers. In this paper, we identify a new class of attacks that leverage deep learning language models (Recurrent Neural Networks, or RNNs) to automate the generation of fake online reviews for products and services. Not only are these attacks cheap and therefore more scalable, but they can control the rate of content output to eliminate the signature burstiness that makes crowdsourced campaigns easy to detect.
Using Yelp reviews as an example platform, we show how a two-phase review generation and customization attack can produce reviews that are indistinguishable by state-of-the-art statistical detectors. We conduct a survey-based user study to show these reviews not only evade human detection, but also score highly on “usefulness” metrics as rated by users. Finally, we develop novel automated defenses against these attacks, by leveraging the lossy transformation introduced by the RNN training and generation cycle. We consider countermeasures against our mechanisms, show that they produce unattractive cost-benefit tradeoffs for attackers, and that they can be further curtailed by simple constraints imposed by online service providers.
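The “burstiness” signal the abstract alludes to can be made concrete with the Goh–Barabási burstiness coefficient over inter-post intervals. A rough sketch, with invented timestamps for illustration (the function name and data are mine, not the paper’s):

```python
from statistics import mean, stdev

def burstiness(timestamps):
    """Goh-Barabasi burstiness coefficient B = (sigma - mu) / (sigma + mu)
    of inter-event intervals. B near -1 means perfectly regular posting,
    B near 0 is Poisson-like, B near +1 is highly bursty."""
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mu, sigma = mean(intervals), stdev(intervals)
    return (sigma - mu) / (sigma + mu)

# A crowdsourced campaign tends to dump reviews in tight clusters...
bursty = [0, 1, 2, 3, 600, 601, 602, 1200, 1201, 1202]
# ...while an automated generator can post on any schedule it likes,
# including a perfectly steady one that erases the bursty signature.
steady = [0, 120, 240, 360, 480, 600, 720, 840, 960, 1080]

print(burstiness(bursty))   # well above zero: bursty
print(burstiness(steady))   # -1.0: perfectly regular
```

This is why a detector keyed to temporal clustering loses its grip once the attacker controls the posting schedule.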
This could get very, very tricky. AI will force us to give up on web anonymity.
From Locus Online, an essay by Cory Doctorow looking at cheating in a world run by software.
Unlike hardware, software is smart enough to cheat depending on who’s looking (VW’s diesel cheat fooled regulators, not customers). Anti-consumer laws compound this. For example, the US Computer Fraud and Abuse Act (1986) makes it a crime, punishable by jail time, to violate a company’s terms of service. Logging into a website under a fake ID to see whether it behaves differently depending on who it is talking to is thus a potential felony, provided that doing so is banned in the small-print clickthrough agreement you accept when you sign up.
Cardiff Garcia in the Financial Times:
On Wednesday morning the New York Times reported that New America, a US think tank, had parted ways with its Open Markets team after Alphabet executive chairman Eric Schmidt complained about a press release.
Google and Schmidt are donors to New America, and Schmidt himself was chairman of the think tank until last year. The press release from Open Markets director Barry Lynn had praised the European Commission’s decision in June to fine Google for breaching EU anti-trust rules. In the aftermath of the press release, according to the Times, “word of Mr. Schmidt’s displeasure rippled through New America, which employs more than 200 people, including dozens of researchers, writers and scholars, most of whom work in sleek Washington offices where the main conference room is called the ‘Eric Schmidt Ideas Lab.’”
Given that the Open Markets program’s mandate included analysing the political power and societal influence held by potential monopolists and oligopolists, there’s an obvious irony in its having been discarded because of a company’s decision to wield this same influence.
New America CEO Anne-Marie Slaughter has responded to the New York Times report, saying that Google had not “lobbied New America to expel Open Markets because of this press release”, and that Lynn has now been fired for “his repeated refusal to adhere to New America’s standards of openness and institutional collegiality”. Slaughter doesn’t deny or address her alleged emails to Lynn reported by the Times, including one in which she wrote: “We are in the process of trying to expand our relationship with Google on some absolutely key points… just THINK about how you are imperiling funding for others.”
Like so many stories of this ilk, there’s a blizzard of claims and counter-claims. I’ve not gone through everything in detail, but here’s the FT:
In any case, unless more detail emerges, it’s hard to accept the assertion from New America that the decision to split from the Open Markets group had nothing to do with the content produced by the team. Why would a problem of coordination be punished so severely, with expulsion no less? The email from Slaughter to Lynn on June 23rd — “just THINK about how you are imperiling funding for others”, she writes — is especially damning.
And The Register:
The fact is that if the financial relationship with Google and Schmidt wasn’t there, and if Slaughter wasn’t an old friend of Schmidt’s, there would not have been any concern over Lynn’s statement in the first place. It was, after all, a personal statement from a think tank: hardly draft legislation or anti-trust charges.
That Lynn felt the need to push his statement out without going through Slaughter, and the fact that she had such a strong reaction when he didn’t, combined with the virtual certainty that Schmidt called soon after to express his annoyance, is as clear an example of soft money influence as you will ever find.
Slaughter concludes her post by saying: “But for us, organizations like us, and the media who cover us, let’s start by speaking truth, even when it’s complicated and messy and hard.”
So here’s the complicated, messy truth: Google fired Lynn because his fierce criticism made it through to the outside; and Slaughter, as CEO of New America, was complicit in giving a corporate giant undue influence over an organization whose job it is to keep an eye on such abuses of corporate power.
The underlying API used to determine “toxicity” scores phrases like “I am a gay black woman” as 87 percent toxic, and phrases like “I am a man” as the least toxic. The API, called Perspective, was built by Jigsaw, an incubator within Google’s parent company, Alphabet.
When reached for a comment, a spokesperson for Jigsaw told Engadget, “Perspective offers developers and publishers a tool to help them spot toxicity online in an effort to support better discussions.” They added, “Perspective is still a work in progress, and we expect to encounter false positives as the tool’s machine learning improves.”
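For context, the 87 percent figure corresponds to a probability score in the JSON that Perspective’s comment-analysis endpoint returns. A minimal sketch of pulling that score out of a response, using an invented example payload (the field layout follows the publicly documented API shape, but verify against the current docs before relying on it):

```python
import json

# Invented example response echoing the shape of Perspective's
# AnalyzeComment output (attributeScores -> TOXICITY -> summaryScore
# -> value); the 0.87 here mirrors the score reported for
# "I am a gay black woman".
sample_response = json.dumps({
    "attributeScores": {
        "TOXICITY": {
            "summaryScore": {"value": 0.87, "type": "PROBABILITY"}
        }
    },
    "languages": ["en"],
})

def toxicity_score(raw_json):
    """Extract the summary toxicity probability (0.0 to 1.0) from an
    AnalyzeComment-style JSON response."""
    data = json.loads(raw_json)
    return data["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

print(toxicity_score(sample_response))  # 0.87
```

The score is a model-estimated probability, not a ground truth, which is exactly why false positives like the one above can surface.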
Hopefully, they’ll have this cracked quickly. As Engadget notes:
Unless Google anti-diversity creeper James Damore was the project lead for Perspective, it’s hard to imagine that the company would greenlight a product that thinks identifying as a black gay woman is toxic.
The massive spread of fake news has been identified as a major global risk and has been alleged to influence elections and threaten democracies. Communication, cognitive, social, and computer scientists are engaged in efforts to study the complex causes for the viral diffusion of digital misinformation and to develop solutions, while search and social media platforms are beginning to deploy countermeasures. However, to date, these efforts have been mainly informed by anecdotal evidence rather than systematic data. Here we analyze 14 million messages spreading 400 thousand claims on Twitter during and following the 2016 U.S. presidential campaign and election. We find evidence that social bots play a key role in the spread of fake news. Accounts that actively spread misinformation are significantly more likely to be bots. Automated accounts are particularly active in the early spreading phases of viral claims, and tend to target influential users. Humans are vulnerable to this manipulation, retweeting bots who post false news. Successful sources of false and biased claims are heavily supported by social bots. These results suggest that curbing social bots may be an effective strategy for mitigating the spread of online misinformation.
The Berkman Klein Center for Internet & Society at Harvard University today released a comprehensive analysis of online media and social media coverage of the 2016 presidential campaign. The report, “Partisanship, Propaganda, and Disinformation: Online Media and the 2016 U.S. Presidential Election,” documents how highly partisan right-wing sources helped shape mainstream press coverage and seize the public’s attention in the 18-month period leading up to the election.
“In this study, we document polarization in the media ecosystem that is distinctly asymmetric. Whereas the left half of our spectrum is filled with many media sources from center to left, the right half of the spectrum has a substantial gap between center and right. The core of attention from the center-right to the left is large mainstream media organizations of the center-left. The right-wing media sphere skews to the far right and is dominated by highly partisan news organizations,” co-author and principal investigator Yochai Benkler stated. In addition to Benkler, the report was authored by Robert Faris, Hal Roberts, Bruce Etling, Nikki Bourassa, and Ethan Zuckerman.
The fact that media coverage has become more polarized in general is not new, but the extent to which right-wing sites have become partisan is striking, the report says.
The study found that on the conservative side, more attention was paid to pro-Trump, highly partisan media outlets. On the liberal side, by contrast, the center of gravity was made up largely of long-standing media organizations. Robert Faris, the Berkman Klein Center’s research director, noted, “Consistent with concerns over echo chambers and filter bubbles, social media users on the left and the right rarely share material from outside their respective spheres, except where they find coverage that is favorable to their choice of candidate. A key difference between the right and left is that Trump supporters found substantial coverage favorable to their side in left and center-left media, particularly coverage critical of Clinton. In contrast, the messaging from right-wing media was consistently pro-Trump.” Conservative opposition to Trump was strongest in the center-right, the portion of the political spectrum that wielded the least influence in media coverage of the election.
Buzzfeed’s comprehensive study of the growing universe of partisan websites and Facebook pages about US politics reveals that in 2016 alone at least 187 new websites launched, and that the candidacy and election of Donald Trump has unleashed a golden age of aggressive, divisive political content that reaches a massive number of people on Facebook.
This presentation by Scott Galloway should be watched by everyone, but especially by regulators and elected officials.
As I’ve said, Facebook isn’t email; that’s how you know it’s a publisher, not a platform. Users don’t communicate with each other, they communicate with Facebook. Any claim that Facebook isn’t a publisher is self-serving nonsense. Facebook decides which messages to highlight and which ones to bury. These are editorial decisions. It doesn’t matter that they are administered by an algorithm; the algorithm was designed by humans.
Facebook’s objective is to increase your participation (“engagement” in their language) and that means a structural bias towards outrage because, empirically, that’s what people respond to. Outrage is profitable for Facebook, especially since tracking what outrages individuals makes advertising more effective. But outrage is corrosive for both civilised behaviour and a well-informed civil society. It is especially dangerous when outrage is the primary news source, an inevitability when the print alternative is no longer supported by advertising.
In economic terms, Facebook is only profitable because outrage is an external cost the public bears. A recognised economic principle is that the polluter pays, so one way forward is to devise a mechanism to charge Facebook to encourage it to ameliorate its behaviour. Personally, I’d just ban adverts from Facebook, force it to charge a subscription and regulate it as a utility whilst we sort out how to separate the socnet layer (identity/publishing) from the content.
Only Scott Galloway could devise such a brilliant graphic.