A recent, sprawling Wired feature outlined the results of its analysis of toxicity among online commenters across the United States. Unsurprisingly, it was like catnip for everyone who’s ever heard the phrase “don’t read the comments.” According to “The Great Tech Panic: Trolls Across America,” Vermont has the most toxic online commenters, whereas Sharpsburg, Georgia, “is the least-toxic city in the US.”
There’s just one problem.
Mark Zuckerberg says that Facebook has “been working to ensure the integrity of the German elections this weekend”. Um, OK: a private company supporting democracy (well, maybe – they won’t actually go into details) in a faraway country. The joke’s on us, guys. We do not have to put up with this.
Google, the world’s biggest advertising platform, allows advertisers to specifically target ads to people typing racist and bigoted terms into its search bar, BuzzFeed News has discovered. Not only that, Google will suggest additional racist and bigoted terms once you type some into its ad-buying tool.
Type “White people ruin,” as a potential advertising keyword into Google’s ad platform, and Google will suggest you run ads next to searches including “black people ruin neighborhoods.” Type “Why do Jews ruin everything,” and Google will suggest you run ads next to searches including “the evil jew” and “jewish control of banks.”
BuzzFeed News ran an ad campaign targeted to all these keywords and others this week. The ads went live and were visible when we searched for the keywords we’d selected. Google’s ad buying platform tracked the ad views. The issue is not unique to Google. On Thursday, ProPublica reported a similar issue with Facebook’s ad targeting system.
Want to market Nazi memorabilia, or recruit marchers for a far-right rally? Facebook’s self-service ad-buying platform had the right audience for you.
Until this week, when we asked Facebook about it, the world’s largest social network enabled advertisers to direct their pitches to the news feeds of almost 2,300 people who expressed interest in the topics of “Jew hater,” “How to burn jews,” or, “History of ‘why jews ruin the world.’”
To test if these ad categories were real, we paid $30 to target those groups with three “promoted posts” — in which a ProPublica article or post was displayed in their news feeds. Facebook approved all three ads within 15 minutes.
After we contacted Facebook, it removed the anti-Semitic categories — which were created by an algorithm rather than by people — and said it would explore ways to fix the problem, such as limiting the number of categories available or scrutinizing them before they are displayed to buyers.
Regular readers will have spotted that I’m not a big fan (just say no, kids) of Facebook. This is a new low.
Malicious crowdsourcing forums are gaining traction as vehicles for spreading misinformation online, but are limited by the costs of hiring and managing human workers. In this paper, we identify a new class of attacks that leverage deep learning language models (Recurrent Neural Networks, or RNNs) to automate the generation of fake online reviews for products and services. Not only are these attacks cheap and therefore more scalable, but they can control the rate of content output to eliminate the signature burstiness that makes crowdsourced campaigns easy to detect.
Using Yelp reviews as an example platform, we show how a two-phase review generation and customization attack can produce reviews that are indistinguishable from real ones by state-of-the-art statistical detectors. We conduct a survey-based user study to show these reviews not only evade human detection, but also score high on “usefulness” metrics as rated by users. Finally, we develop novel automated defenses against these attacks, by leveraging the lossy transformation introduced by the RNN training and generation cycle. We consider countermeasures against our mechanisms, show that they produce unattractive cost-benefit tradeoffs for attackers, and that they can be further curtailed by simple constraints imposed by online service providers.
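The generation step the paper describes — train a language model on a review corpus, then sample text one character at a time — can be sketched with a toy character-level Markov model standing in for the paper’s RNN. The corpus, context length, and function names below are illustrative assumptions, not the authors’ code:

```python
import random
from collections import defaultdict

def train_char_model(corpus, order=3):
    """Count which characters follow each length-`order` context in the corpus."""
    model = defaultdict(list)
    for i in range(len(corpus) - order):
        context = corpus[i:i + order]
        model[context].append(corpus[i + order])
    return model

def sample_review(model, seed, length=80, rng=None):
    """Generate text one character at a time, the way an RNN sampler would."""
    rng = rng or random.Random(0)
    out = seed
    order = len(seed)
    for _ in range(length):
        choices = model.get(out[-order:])
        if not choices:  # no continuation seen for this context
            break
        out += rng.choice(choices)
    return out
```

The real attack trains an LSTM on millions of Yelp reviews and adds a second customization phase that swaps in domain-specific nouns (restaurant names, dishes); the Markov chain here only illustrates the context-to-next-character sampling idea, which is also why such output is cheap to produce at an arbitrary, burst-free rate.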
This could get very, very tricky. AI will force us to give up on web anonymity.
From Locus Online, an essay by Cory Doctorow looking at cheating in a world run by software.
Unlike hardware, software is smart enough to cheat depending on who’s looking (VW’s diesel cheat fooled regulators, it wasn’t for customers). Anti-consumer laws compound this. For example, the US Computer Fraud and Abuse Act (1986) makes it a crime, with jail-time, to violate a company’s terms of service. Logging into a website under a fake ID to see if it behaves differently depending on who it is talking to is thus a potential felony, provided that doing so is banned in the small-print clickthrough agreement when you sign up.
Cardiff Garcia in the Financial Times:
On Wednesday morning the New York Times reported that New America, a US think tank, had parted ways with its Open Markets team after Alphabet executive chairman Eric Schmidt complained about a press release.
Google and Schmidt are donors to New America, and Schmidt himself was chairman of the think tank until last year. The press release from Open Markets director Barry Lynn had praised the European Commission’s decision in June to fine Google for breaching EU anti-trust rules. In the aftermath of the press release, according to the Times, “word of Mr. Schmidt’s displeasure rippled through New America, which employs more than 200 people, including dozens of researchers, writers and scholars, most of whom work in sleek Washington offices where the main conference room is called the ‘Eric Schmidt Ideas Lab.’”
Given that the Open Markets program’s mandate included analysing the political power and societal influence held by potential monopolists and oligopolists, there’s an obvious irony in its having been discarded because of a company’s decision to wield this same influence.
New America CEO Anne-Marie Slaughter has responded to the New York Times report, saying that Google had not “lobbied New America to expel Open Markets because of this press release”, and that Lynn has now been fired for “his repeated refusal to adhere to New America’s standards of openness and institutional collegiality”. Slaughter doesn’t deny or address her alleged emails to Lynn reported by the Times, including one in which she wrote: “We are in the process of trying to expand our relationship with Google on some absolutely key points… just THINK about how you are imperiling funding for others.”
Like so many stories of this ilk, there’s a blizzard of claims and counter-claims. I’ve not gone through everything in detail, but here’s the FT:
In any case, unless more detail emerges, it’s hard to accept the assertion from New America that the decision to split from the Open Markets group had nothing to do with the content produced by the team. Why would a problem of coordination be punished so severely, with expulsion no less? The email from Slaughter to Lynn on June 23rd — “just THINK about how you are imperiling funding for others”, she writes — is especially damning.
And The Register:
The fact is that if the financial relationship with Google and Schmidt wasn’t there, and if Slaughter wasn’t an old friend of Schmidt’s, there would not have been any concern over Lynn’s statement in the first place. It was, after all, a personal statement from a think tank: hardly draft legislation or anti-trust charges.
That Lynn felt the need to push his statement out without going through Slaughter, and the fact that she had such a strong reaction when he didn’t, combined with the virtual certainty that Schmidt called soon after to express his annoyance, is as clear an example of soft money influence as you will ever find.
Slaughter concludes her post by saying: “But for us, organizations like us, and the media who cover us, let’s start by speaking truth, even when it’s complicated and messy and hard.”
So here’s the complicated, messy truth: Google fired Lynn because his fierce criticism made it through to the outside; and Slaughter, as CEO of New America, was complicit in giving a corporate giant undue influence over an organization whose job it is to keep an eye on such abuses of corporate power.
The underlying API used to determine “toxicity” scores phrases like “I am a gay black woman” as 87 percent toxic, while rating phrases like “I am a man” among the least toxic. The API, called Perspective, is built by Jigsaw, an incubator inside Google’s parent company Alphabet.
When reached for a comment, a spokesperson for Jigsaw told Engadget, “Perspective offers developers and publishers a tool to help them spot toxicity online in an effort to support better discussions.” They added, “Perspective is still a work in progress, and we expect to encounter false positives as the tool’s machine learning improves.”
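For context, Perspective exposes these scores through a simple REST endpoint. A minimal sketch of the request it expects — endpoint and field names as documented for the public v1alpha1 API, with no API key or live network call included here — looks like this:

```python
import json

PERSPECTIVE_URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
)

def build_toxicity_request(text):
    """Build the JSON body for a Perspective TOXICITY query.

    Per Jigsaw's v1alpha1 API docs, the score comes back under
    response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
    as a probability between 0 and 1 (so "87 percent toxic" is 0.87).
    """
    return {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }

# POST this body as JSON to PERSPECTIVE_URL with an API key to get
# a live score; only the payload is constructed here.
body = json.dumps(build_toxicity_request("I am a gay black woman"))
```

Anyone can reproduce the Engadget result by sending both example phrases through this endpoint and comparing the returned `summaryScore` values.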
Hopefully, they’ll have this cracked quickly. As Engadget notes:
Unless Google anti-diversity creeper James Damore was the project lead for Perspective, it’s hard to imagine that the company would greenlight a product that thinks that identifying as a gay black woman is toxic.
The massive spread of fake news has been identified as a major global risk and has been alleged to influence elections and threaten democracies. Communication, cognitive, social, and computer scientists are engaged in efforts to study the complex causes for the viral diffusion of digital misinformation and to develop solutions, while search and social media platforms are beginning to deploy countermeasures. However, to date, these efforts have been mainly informed by anecdotal evidence rather than systematic data. Here we analyze 14 million messages spreading 400 thousand claims on Twitter during and following the 2016 U.S. presidential campaign and election. We find evidence that social bots play a key role in the spread of fake news. Accounts that actively spread misinformation are significantly more likely to be bots. Automated accounts are particularly active in the early spreading phases of viral claims, and tend to target influential users. Humans are vulnerable to this manipulation, retweeting bots who post false news. Successful sources of false and biased claims are heavily supported by social bots. These results suggest that curbing social bots may be an effective strategy for mitigating the spread of online misinformation.
The Berkman Klein Center for Internet & Society at Harvard University today released a comprehensive analysis of online media and social media coverage of the 2016 presidential campaign. The report, “Partisanship, Propaganda, and Disinformation: Online Media and the 2016 U.S. Presidential Election,” documents how highly partisan right-wing sources helped shape mainstream press coverage and seize the public’s attention in the 18-month period leading up to the election.
“In this study, we document polarization in the media ecosystem that is distinctly asymmetric. Whereas the left half of our spectrum is filled with many media sources from center to left, the right half of the spectrum has a substantial gap between center and right. The core of attention from the center-right to the left is large mainstream media organizations of the center-left. The right-wing media sphere skews to the far right and is dominated by highly partisan news organizations,” co-author and principal investigator Yochai Benkler stated. In addition to Benkler, the report was authored by Robert Faris, Hal Roberts, Bruce Etling, Nikki Bourassa, and Ethan Zuckerman.
The fact that media coverage has become more polarized in general is not new, but the extent to which right-wing sites have become partisan is striking, the report says.
The study found that on the conservative side, more attention was paid to pro-Trump, highly partisan media outlets. On the liberal side, by contrast, the center of gravity was made up largely of long-standing media organizations. Robert Faris, the Berkman Klein Center’s research director, noted, “Consistent with concerns over echo chambers and filter bubbles, social media users on the left and the right rarely share material from outside their respective spheres, except where they find coverage that is favorable to their choice of candidate. A key difference between the right and left is that Trump supporters found substantial coverage favorable to their side in left and center-left media, particularly coverage critical of Clinton. In contrast, the messaging from right-wing media was consistently pro-Trump.” Conservative opposition to Trump was strongest in the center-right, the portion of the political spectrum that wielded the least influence in media coverage of the election.