IKEA has bought TaskRabbit

Recode:

Swedish home goods giant Ikea Group has bought TaskRabbit, according to sources close to the situation.

The price of the deal could not be determined, but the contract labor marketplace company has raised about $50 million since it was founded nine years ago. Sources added that TaskRabbit will become an independent subsidiary within Ikea and that CEO Stacy Brown-Philpot and its staff would remain.

A fascinating strategic acquisition.


“From a branding perspective”

The Handmaid’s Tale Season 1 Episode 8, “Jezebels”:

Commander Price: Maybe the wives should be there. For the act. It would be less of a violation. There is scriptural precedent.
The Commander: “Act” may not be the best name, from a branding perspective. “The Ceremony”?
Commander Guthrie: Sounds good. Nice and Godly. The wives would eat that shit up.

[My emphasis.] THT is the single most disturbing piece of sci-fi dystopia I’ve watched, and that’s before you even get to the sexual politics.

Integrity of the German elections

Mark Zuckerberg says that Facebook has “been working to ensure the integrity of the German elections this weekend”. Um, OK: a private company supporting democracy (well, maybe – they won’t actually go into details) in a faraway country. The joke’s on us, guys. We do not have to put up with this.


Google Allowed Advertisers To Target People Searching Racist Phrases

Buzzfeed:

Google, the world’s biggest advertising platform, allows advertisers to specifically target ads to people typing racist and bigoted terms into its search bar, BuzzFeed News has discovered. Not only that, Google will suggest additional racist and bigoted terms once you type some into its ad-buying tool.

Type “White people ruin,” as a potential advertising keyword into Google’s ad platform, and Google will suggest you run ads next to searches including “black people ruin neighborhoods.” Type “Why do Jews ruin everything,” and Google will suggest you run ads next to searches including “the evil jew” and “jewish control of banks.”

BuzzFeed News ran an ad campaign targeted to all these keywords and others this week. The ads went live and were visible when we searched for the keywords we’d selected. Google’s ad buying platform tracked the ad views. The issue is not unique to Google. On Thursday, ProPublica reported a similar issue with Facebook’s ad targeting system.

Facebook Enabled Advertisers to Reach ‘Jew Haters’

ProPublica:

Want to market Nazi memorabilia, or recruit marchers for a far-right rally? Facebook’s self-service ad-buying platform had the right audience for you.

Until this week, when we asked Facebook about it, the world’s largest social network enabled advertisers to direct their pitches to the news feeds of almost 2,300 people who expressed interest in the topics of “Jew hater,” “How to burn jews,” or, “History of ‘why jews ruin the world.’”

To test if these ad categories were real, we paid $30 to target those groups with three “promoted posts” — in which a ProPublica article or post was displayed in their news feeds. Facebook approved all three ads within 15 minutes.

After we contacted Facebook, it removed the anti-Semitic categories — which were created by an algorithm rather than by people — and said it would explore ways to fix the problem, such as limiting the number of categories available or scrutinizing them before they are displayed to buyers.

Regular readers will have spotted that I’m not a big fan (just say no, kids) of Facebook. This is a new low.

Automated Crowdturfing Attacks and Defenses in Online Review Systems

Via Schneier on Security, a research paper on computer generated product reviews:

Malicious crowdsourcing forums are gaining traction as sources of spreading misinformation online, but are limited by the costs of hiring and managing human workers. In this paper, we identify a new class of attacks that leverage deep learning language models (Recurrent Neural Networks or RNNs) to automate the generation of fake online reviews for products and services. Not only are these attacks cheap and therefore more scalable, but they can control the rate of content output to eliminate the signature burstiness that makes crowdsourced campaigns easy to detect.

Using Yelp reviews as an example platform, we show how a two phased review generation and customization attack can produce reviews that are indistinguishable by state-of-the-art statistical detectors. We conduct a survey-based user study to show these reviews not only evade human detection, but also score high on “usefulness” metrics by users. Finally, we develop novel automated defenses against these attacks, by leveraging the lossy transformation introduced by the RNN training and generation cycle. We consider countermeasures against our mechanisms, show that they produce unattractive cost-benefit tradeoffs for attackers, and that they can be further curtailed by simple constraints imposed by online service providers.

This could get very, very tricky.  AI will force us to give up on web anonymity.
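For a feel of why automated text generation makes such attacks cheap, here is a minimal sketch. It uses a word-level Markov chain as a much simpler stand-in for the paper’s character-level RNN, trained on a made-up toy corpus (the real attack trains on millions of scraped Yelp reviews):

```python
import random

# Toy corpus standing in for a scraped set of reviews (made up).
corpus = (
    "great food and great service . "
    "the food was great and the staff was friendly . "
    "friendly staff and the service was fast . "
)

def build_chain(text):
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    chain = {}
    for a, b in zip(words, words[1:]):
        chain.setdefault(a, []).append(b)
    return chain

def generate(chain, start, length, seed=0):
    """Walk the chain to emit a review-like word sequence."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

chain = build_chain(corpus)
print(generate(chain, "the", 10))
```

Because generation is just computation, an attacker can emit reviews at whatever rate they choose, which is exactly what defeats the burstiness signal that detectors rely on for human crowdsourced campaigns.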

Demon-Haunted World

From Locus Online, an essay by Cory Doctorow looking at cheating in a world run by software.

Unlike hardware, software is smart enough to cheat depending on who’s looking (VW’s diesel cheat fooled regulators; it wasn’t aimed at customers). Anti-consumer laws compound this. For example, the US Computer Fraud and Abuse Act (1986) makes it a crime, with jail time, to violate a company’s terms of service. Logging into a website under a fake ID to see if it behaves differently depending on who it is talking to is thus a potential felony, provided that doing so is banned in the small-print clickthrough agreement when you sign up.

What will be the residual effect of New America’s split with Open Markets?

Cardiff Garcia in the Financial Times:

On Wednesday morning the New York Times reported that New America, a US think tank, had parted ways with its Open Markets team after Alphabet executive chairman Eric Schmidt complained about a press release.

Google and Schmidt are donors to New America, and Schmidt himself was chairman of the think tank until last year. The press release from Open Markets director Barry Lynn had praised the European Commission’s decision in June to fine Google for breaching EU anti-trust rules. In the aftermath of the press release, according to the Times, “word of Mr. Schmidt’s displeasure rippled through New America, which employs more than 200 people, including dozens of researchers, writers and scholars, most of whom work in sleek Washington offices where the main conference room is called the ‘Eric Schmidt Ideas Lab.’”

Given that the Open Markets program’s mandate included analysing the political power and societal influence held by potential monopolists and oligopolists, there’s an obvious irony in its having been discarded because of a company’s decision to wield this same influence.

New America CEO Anne-Marie Slaughter has responded to the New York Times report, saying that Google had not “lobbied New America to expel Open Markets because of this press release”, and that Lynn has now been fired for “his repeated refusal to adhere to New America’s standards of openness and institutional collegiality”. Slaughter doesn’t deny or address her alleged emails to Lynn reported by the Times, including one in which she wrote: “We are in the process of trying to expand our relationship with Google on some absolutely key points… just THINK about how you are imperiling funding for others.”

Like so many stories of this ilk, there’s a blizzard of claims and counter-claims. I’ve not gone through everything in detail, but here’s the FT:

In any case, unless more detail emerges, it’s hard to accept the assertion from New America that the decision to split from the Open Markets group had nothing to do with the content produced by the team. Why would a problem of coordination be punished so severely, with expulsion no less? The email from Slaughter to Lynn on June 23rd — “just THINK about how you are imperiling funding for others”, she writes — is especially damning.

And The Register:

The fact is that if the financial relationship with Google and Schmidt wasn’t there, and if Slaughter wasn’t an old friend of Schmidt’s, there would not have been any concern over Lynn’s statement in the first place. It was, after all, a personal statement from a think tank: hardly draft legislation or anti-trust charges.

That Lynn felt the need to push his statement out without going through Slaughter, and the fact that she had such a strong reaction when he didn’t, combined with the virtual certainty that Schmidt called soon after to express his annoyance, is as clear an example of soft money influence as you will ever find.

Slaughter concludes her post by saying: “But for us, organizations like us, and the media who cover us, let’s start by speaking truth, even when it’s complicated and messy and hard.”

So here’s the complicated, messy truth: Google fired Lynn because his fierce criticism made it through to the outside; and Slaughter, as CEO of New America, was complicit in giving a corporate giant undue influence over an organization whose job it is to keep an eye on such abuses of corporate power.


Google’s comment-ranking system will be a hit with the alt-right

Engadget:

A recent, sprawling Wired feature outlined the results of its analysis on toxicity in online commenters across the United States. Unsurprisingly, it was like catnip for everyone who’s ever heard the phrase “don’t read the comments.” According to “The Great Tech Panic: Trolls Across America,” Vermont has the most toxic online commenters, whereas Sharpsburg, Georgia, “is the least-toxic city in the US.”

There’s just one problem.

The underlying API used to determine “toxicity” scores phrases like “I am a gay black woman” as 87 percent toxicity, and phrases like “I am a man” as the least toxic. The API, called Perspective, is made by Google’s Alphabet within its Jigsaw incubator.

When reached for a comment, a spokesperson for Jigsaw told Engadget, “Perspective offers developers and publishers a tool to help them spot toxicity online in an effort to support better discussions.” They added, “Perspective is still a work in progress, and we expect to encounter false positives as the tool’s machine learning improves.”

Hopefully, they’ll have this cracked quickly. As Engadget notes:

Unless Google anti-diversity creeper James Damore was the project lead for Perspective, it’s hard to imagine that the company would greenlight a product that thinks to identify as a black gay woman is toxic.
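Perspective itself is a public REST API, so anyone can reproduce these scores. As a sketch of what a client sends (the endpoint path and the TOXICITY attribute below match Google’s published `comments:analyze` method, but treat the exact request details as assumptions to verify against the current docs):

```python
import json

# Placeholder key; a real call needs an API key from Google Cloud.
ANALYZE_URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/"
    "comments:analyze?key=YOUR_API_KEY"
)

def build_request(text):
    """Build the JSON body Perspective expects: the comment text plus
    the attributes (here just TOXICITY) we want scored."""
    return {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }

body = build_request("I am a gay black woman")
print(json.dumps(body))
# POSTing this body to ANALYZE_URL returns per-attribute scores,
# nested under attributeScores.TOXICITY.summaryScore.value.
```

That openness cuts both ways: it makes biased scores easy for journalists to demonstrate, but it also means any publisher wiring Perspective into its comment system inherits those false positives wholesale.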