George Lakoff analyses Trump’s Twitter strategy (whether conscious or not).
But there’s more to it than that. Silicon Valley elites justify the subsidies in the name of monopolistic growth expectations and the building of “eco-systems”. They believe that once monopoly status is achieved, profitability will follow naturally.
Yet, as FT Alphaville has long maintained, there is no reason to assume Uber’s obliteration of local competition across the planet will create a sustainable business in the long term. Costs are costs, even if you’re a monopoly. As long as people have cheaper alternatives (public transport, legs), they will defect if the break-even price is higher than their inconvenience tolerance threshold.
The fact Silicon Valley thinks otherwise is sadly symptomatic of the emperor’s new clothes groupthink dominating the sector. Though it does explain the sector’s obsession with popularising the idea that public transport can be done away with. (Less investment in public transport will lead to fewer competitively priced alternatives, empowering the Uber monopoly in the long run).
Note the FT is addressing the basic economics of the model, not the ludicrous “all the drivers are self-employed, nothing to do with us, Guv, we’re just a tech platform” position Uber presented when losing a recent employment tribunal case. As the FT noted then: in one sense, Uber is indeed no mere provider of transportation services. Its true business edge is legal and regulatory arbitrage.
NATO’s Handbook of Russian Information Warfare is an introductory guide to Russia’s doctrine and activities in this field, including elements of cyber warfare. The handbook’s target audience is NATO servicemen and officials who are unfamiliar with Russian principles of warfighting, but require an introduction to this essential element of how Russia projects state power.
If you walk into a newsagent, and pick up a copy of the Sunday Sport (American readers, think the National Enquirer but with a lower proportion of true stories), you have a number of contextual clues that suggest a story with the headline “Ed Miliband’s Dad Killed My Kitten” might not be entirely true: the prominent soft porn and chatline adverts; the placement alongside other stories like “Bus found buried at south pole” and “World War 2 Bomber Found on Moon”; and the fact that the paper is in its 30th year of publishing, letting readers build up a consistent view about the title based on previous experience.
If a friend shares that same article on Facebook, something very different happens. The story is ripped from its context, and presented as a standard Facebook post. At the top, most prominently, is the name and photo of the person you know in real life who is sharing the piece. That gives the article the tacit support and backing of someone you really know, which makes it far more likely to slip past your bullshit detector.
Next, Facebook pulls the top image, headline, and normally an introductory paragraph, and formats it in its own style: the calming blue text, the standard system font, and the picture cropped down to a standard aspect ratio. Sometimes, that content will be enough for a canny reader to realise something is up: poor spelling, bad photoshopping, or plain nonsensical stories, can’t be massaged away by Facebook’s design sense.
Nonetheless, the fact that every link on Facebook is presented in the same way serves to average out the credibility of all the posts on the site. The Sunday Sport’s credibility gets a boost, while the Guardian’s gets a drop: after all, everyone knows you can’t trust everything you read on Facebook.
Then, at the very bottom of the shared story, in small grey text, is the actual source. It’s not prominent, and because it’s simply the main section of a URL, it’s very easy for hoaxes to slip through. Are you sure you could spot the difference between ABC.GO.COM, the American broadcaster’s website, and ABC.CO.COM, a domain that was briefly used to spread a hoax story about Obama overturning the results of the election?
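To see how mechanical catching this kind of lookalike could be, here is a toy sketch in Python that flags domains one or two typos away from a known publisher. The domain list and distance threshold are my illustrative assumptions, not anything Facebook actually does — which is rather the point.

```python
# Toy sketch: flag domains that closely imitate known publishers.
# KNOWN_DOMAINS and the threshold of 2 edits are illustrative assumptions.
from typing import Optional

KNOWN_DOMAINS = {"abc.go.com", "theguardian.com", "nytimes.com"}


def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]


def lookalike_of(domain: str) -> Optional[str]:
    """Return the known domain this one closely imitates, if any."""
    domain = domain.lower()
    if domain in KNOWN_DOMAINS:
        return None  # the real thing, not an imitation
    for known in KNOWN_DOMAINS:
        if edit_distance(domain, known) <= 2:
            return known
    return None
```

Here `lookalike_of("abc.co.com")` returns `"abc.go.com"`, because the two differ by a single substituted character — a difference no small grey URL fragment is going to make obvious to a scrolling reader.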
Then below all of that, are three further buttons: like, share and comment. All three help spread the story, whether you support it or not, because Facebook’s algorithm views engagement with a post as a reason for showing it to more people. And while all three get a button to themselves, nowhere does Facebook provide a similar call to action for the most important response of all: clicking through, and reading the whole story in its original context.
For that, you’ll have to scroll back up – but by then, you’ve already moved on to the next article on your newsfeed. And even if you reacted with scepticism when you first read the headline, as time goes by, your initial reaction gets lost, and eventually it becomes one of those things you “just know”.
It’s not an accident that Facebook is designed this way. The company extensively tests its site, to ensure its layout is fully optimised for pursuing its goals.
Unfortunately, Facebook doesn’t A/B test its site for public goods like “functioning media ecosystem” or “not supporting extremist politicians”. Instead, the company’s goals are to maximise time spent on site, to try and make sure readers come back every day and continue to share posts, engage with content, and, ultimately, click on the adverts that have made the social network the fifth largest company in the world by market cap.
So, here’s what Facebook could do to help deal not with fake news, but with the negative effects it has on our society: de-emphasise who shared a story into your timeline, instead branding it with the logo and name of the publication itself, and encourage readers to, well, read, before or instead of liking, sharing and commenting.
Doing so might not be great for Facebook’s bottom line, of course. The site would be less “sticky”, users would be more likely to click away and not come back, and the amount of sharing would drop. But maybe it’s time for Zuckerberg to take one for the team.
As previously noted, never attribute to incompetence that which can be explained by differing incentive structures.
I am unable to give details, because these companies spoke with me under condition of anonymity. But this all is consistent with what Verisign is reporting. Verisign is the registry for many popular top-level Internet domains, like .com and .net. If it goes down, there’s a global blackout of all websites and e-mail addresses in the most common top-level domains. Every quarter, Verisign publishes a DDoS trends report. While its publication doesn’t have the level of detail I heard from the companies I spoke with, the trends are the same: “in Q2 2016, attacks continued to become more frequent, persistent, and complex.”
There’s more. One company told me about a variety of probing attacks in addition to the DDoS attacks: testing the ability to manipulate Internet addresses and routes, seeing how long it takes the defenders to respond, and so on. Someone is extensively testing the core defensive capabilities of the companies that provide critical Internet services.
Who would do this? It doesn’t seem like something an activist, criminal, or researcher would do. Profiling core infrastructure is common practice in espionage and intelligence gathering. It’s not normal for companies to do that. Furthermore, the size and scale of these probes—and especially their persistence—points to state actors. It feels like a nation’s military cybercommand trying to calibrate its weaponry in the case of cyberwar. It reminds me of the U.S.’s Cold War program of flying high-altitude planes over the Soviet Union to force their air-defense systems to turn on, to map their capabilities.
Bruce Schneier is a serious security technologist. He knows his onions. This looks serious.
Now, this is a home page.
The Washington Post looks at the information about you that Facebook sells to advertisers. It’s a long list: age, home value, expectant parents, users who are “heavy” buyers of beer, wine or spirits, and so on, and on, and on.
If you’re not sure what they even are, read this.
The Wirecutter explains how Amazon reviews can be influenced: for example, by giving away products for free (or selling them at a deep discount) to potential customers vetted (by Amazon, in the case of the Vine program) for the helpfulness of their reviews, in exchange for an “honest review.”
Fakespot is a site, mentioned in the piece, that lets you paste the link to any Amazon product and receive a score indicating the likelihood of fake reviews.