The Department of Justice wants access to encrypted consumer devices but promises not to infiltrate business products or affect critical infrastructure. Yet that’s not possible, because there is no longer any difference between those categories of devices. Consumer devices are critical infrastructure. They affect national security. And it would be foolish to weaken them, even at the request of law enforcement.
In his keynote address at the International Conference on Cybersecurity, Attorney General William Barr argued that companies should weaken encryption systems to gain access to consumer devices for criminal investigations. Barr repeated a common fallacy about a difference between military-grade encryption and consumer encryption: “After all, we are not talking about protecting the nation’s nuclear launch codes. Nor are we necessarily talking about the customized encryption used by large business enterprises to protect their operations. We are talking about consumer products and services such as messaging, smart phones, e-mail, and voice and data applications.”
The thing is, that distinction between military and consumer products largely doesn’t exist. All of those “consumer products” Barr wants access to are used by government officials — heads of state, legislators, judges, military commanders and everyone else — worldwide. They’re used by election officials, police at all levels, nuclear power plant operators, CEOs and human rights activists. They’re critical to national security as well as personal security.
This wasn’t true during much of the Cold War. Before the Internet revolution, military-grade electronics were different from consumer-grade. Military contracts drove innovation in many areas, and those sectors got the cool new stuff first. That started to change in the 1980s, when consumer electronics started to become the place where innovation happened. The military responded by creating a category of military hardware called COTS: commercial off-the-shelf technology. More consumer products became approved for military applications. Today, pretty much everything that doesn’t have to be hardened for battle is COTS and is the exact same product purchased by consumers. And a lot of battle-hardened technologies are the same computer hardware and software products as the commercial items, but in sturdier packaging.
The day before we meet, the tech site Gizmodo publishes a piece on how extremist channels remain on YouTube, despite the new policies. In the face of fairly constant criticism, does Wojcicki ever feel like walking away? “No, I don’t. Because I feel a commitment to solving these challenges,” she says. “I care about the legacy that we leave and about how history will view this point in time. Here’s this new technology, we’ve enabled all these new voices. What did we do? Did we decide to shut it down and say only a small set of people will have their voice? Who will decide that, and how will it be decided? Or do we find a way to enable all these different voices and perspectives, but find a way to manage the abuse of it? I’m focused on making sure we can manage the challenges of having an open platform in a responsible way.”
My emphasis. Her job depends upon her denying that there is a difference between merely uploading a video (and it being lost in the millions of others) and deliberately recommending it to others. The YouTube recommendation algorithm is simply toxic. And, like polluters everywhere, they do it because it makes them money.
When you start to think about all of the ways Superhuman can be used to violate privacy, you really wonder why The New York Times spent 1,200 words on a tongue-bath that doesn’t even talk meaningfully about privacy issues at all. We don’t need journalism to tell us where venture capitalists are putting other people’s money. We need it to examine the ramifications of the technology we are pushing into the world and in what ways it might shift the Overton Window for Ethics in either helpful or hurtful ways.
Arunesh Mathur et al., Princeton:
Dark patterns are user interface design choices that benefit an online service by coercing, steering, or deceiving users into making unintended and potentially harmful decisions. We present automated techniques that enable experts to identify dark patterns on a large set of websites. Using these techniques, we study shopping websites, which often use dark patterns to influence users into making more purchases or disclosing more information than they would otherwise. Analyzing ∼53K product pages from ∼11K shopping websites, we discover 1,841 dark pattern instances, together representing 15 types and 7 categories. We examine the underlying influence of these dark patterns, documenting their potential harm on user decision-making. We also examine these dark patterns for deceptive practices, and find 183 websites that engage in such practices. Finally, we uncover 22 third-party entities that offer dark patterns as a turnkey solution. Based on our findings, we make recommendations for stakeholders including researchers and regulators to study, mitigate, and minimize the use of these patterns.
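The paper's pipeline (a crawler plus clustering of page-segment text) is considerably more sophisticated, but a minimal keyword-heuristic sketch gives a feel for what automated detection means here. The pattern names and phrases below are hypothetical illustrations, not the authors' taxonomy:

```python
import re

# Hypothetical keyword heuristics for a few dark-pattern types.
# The Princeton study identifies patterns by crawling product pages
# and clustering segment text, not by simple keyword matching.
PATTERNS = {
    "urgency": re.compile(r"(only \d+ left|offer ends|hurry)", re.I),
    "scarcity": re.compile(r"(\d+ people are viewing|almost gone)", re.I),
    "confirmshaming": re.compile(r"no thanks, i (hate|don't want)", re.I),
}

def detect_dark_patterns(page_text: str) -> list[str]:
    """Return the names of pattern types whose heuristic matches the page text."""
    return [name for name, rx in PATTERNS.items() if rx.search(page_text)]

print(detect_dark_patterns("Hurry! Only 3 left in stock. 12 people are viewing this."))
# → ['urgency', 'scarcity']
```

Real detection has to cope with dynamically rendered pages and deliberately varied wording, which is why the paper's approach works on crawled page segments rather than raw keywords.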
An online game in which people play the role of propaganda producers to help them identify real world disinformation has been shown to increase “psychological resistance” to fake news, according to a study of 15,000 participants.
In February 2018, University of Cambridge researchers helped launch the browser game Bad News. Thousands of people spent fifteen minutes completing it, with many allowing the data to be used for a study.
Players stoke anger and fear by manipulating news and social media within the simulation: deploying Twitter bots, photoshopping evidence, and inciting conspiracy theories to attract followers – all while maintaining a “credibility score” for persuasiveness.
“Research suggests that fake news spreads faster and deeper than the truth, so combatting disinformation after-the-fact can be like fighting a losing battle,” said Dr Sander van der Linden, Director of the Cambridge Social Decision-Making Lab.
Judge Chhabria was at times skeptical of Snyder’s argument that no expectation of privacy remained, rejecting it as treating personal privacy as a binary: “like either you have a full expectation of privacy, or you have no expectation of privacy at all,” as the judge put it at one point. Chhabria continued with a relatable hypothetical:
If I share [information] with ten people, that doesn’t eliminate my expectation of privacy. It might diminish it, but it doesn’t eliminate it. And if I share something with ten people on the understanding that the entity that is helping me share it will not further disseminate it to a thousand companies, I don’t understand why I don’t have — why that’s not a violation of my expectation of privacy.
Snyder responded with an incredible metaphor for how Facebook sees your use of its services — legally, at least:
Let me give you a hypothetical of my own. I go into a classroom and invite a hundred friends. This courtroom. I invite a hundred friends, I rent out the courtroom, and I have a party. And I disclose — And I disclose something private about myself to a hundred people, friends and colleagues. Those friends then rent out a 100,000-person arena, and they rebroadcast those to 100,000 people. I have no cause of action because by going to a hundred people and saying my private truths, I have negated any reasonable expectation of privacy, because the case law is clear.
And there it is, in broad daylight: Using Facebook is a depressing party taking place in a courtroom, for some reason, that’s being simultaneously broadcast to a 100,000-person arena on a sort of time delay. If you show up at the party, don’t be mad when your photo winds up on the Jumbotron. That is literally the company’s legal position.
Don’t pretend you weren’t warned.
Kevin Litman-Navarro in The New York Times has an excellent analysis of privacy policies. Recommended for its effective use of graphics, as well as for noting how Google’s policy has changed over time, the incomprehensibility of Airbnb’s policy, and the clear, simple language used by the BBC.
Platforms might have been something new, but they sure did a lot of things that previous information intermediaries had. “Their choices about what can appear, how it is organized, how it is monetized, what can be removed and why, and what the technical architecture allows and prohibits, are all real and substantive interventions into the contours of public discourse,” Gillespie wrote.
Yet for years the internet platforms mostly denied that they were much of an intervention at all. When Senator Joe Lieberman tried to get YouTube to take down what he characterized as Islamist training videos in 2008, the YouTube team responded with free-speech bromides. “YouTube encourages free speech and defends everyone’s right to express unpopular points of view,” they wrote. “We believe that YouTube is a richer and more relevant platform for users precisely because it hosts a diverse range of views, and rather than stifle debate we allow our users to view all acceptable content and make up their own minds.”
Facebook drew on that sense of being “just a platform” after conservatives challenged what they saw as the company’s liberal bias in mid-2016. Zuckerberg began to use—at least in public—the line that Facebook was “a platform for all ideas.”
But that prompted many people to ask: What about awful, hateful ideas? Why, exactly, should Facebook host them, algorithmically serve them up, or lead users to groups filled with them?
These companies are continuing to make their platform arguments, but every day brings more conflicts that they seem unprepared to resolve. The platform defense used to shut down the why questions: Why should YouTube host conspiracy content? Why should Facebook host provably false information? Facebook, YouTube, and their kin keep trying to answer, We’re platforms! But activists and legislators are now saying, So what? “I think they have proven—by not taking down something they know is false—that they were willing enablers of the Russian interference in our election,” Nancy Pelosi said after the altered-video fracas.
Given how powerful and flexible the rhetoric has been, the idea of the platform will not simply exit stage right. “The platform” once perfumed the naive, meretricious, or odious actions that allowed these companies to expand. But as the term rots, it has begun to stink, and anybody who catches a whiff of it might notice what had been masked. These companies are out to grow their businesses, and every other thing is a means to that end.
Dave Winer hits the nail on the head:
Journalism has been very conflicted about Craig Newmark. Truth is he isn’t responsible for anything other than making a product that people wanted. The news industry could have done it, but for some reason didn’t.
The first column represents each cause’s share of US deaths; the second the share of Google searches each receives; third, the relative article mentions in the New York Times; and finally article mentions in The Guardian.
The coverage in both newspapers here is strikingly similar. And the discrepancy between what we actually die from and what we are informed about in the media is what stands out:
- around one-third of deaths from the considered causes resulted from heart disease, yet this cause of death receives only 2-3 percent of Google searches and media coverage;
- just under one-third of the deaths came from cancer; we actually google cancer a lot (37 percent of searches) and it is a popular entry here on our site; but it receives only 13-14 percent of media coverage;
- we searched for road incidents more frequently than their share of deaths would suggest; however, they receive much less attention in the news;
- when it comes to deaths from strokes, Google searches and media coverage are surprisingly balanced;
- the largest discrepancies concern violent forms of death: suicide, homicide and terrorism. All three receive much more relative attention in Google searches and media coverage than their relative share of deaths. When it comes to the media coverage on causes of death, violent deaths account for more than two-thirds of coverage in the New York Times and The Guardian but account for less than 3 percent of the total deaths in the US.
What’s interesting is that what Americans search for on Google is a much closer reflection of what actually kills us than what is presented in the media. One way to think about it is that media outlets may produce content that they think readers are most interested in, but this is not necessarily reflected in our preferences when we look for information ourselves.
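The scale of the mismatch can be made concrete with a tiny calculation. The shares below are rough figures lifted from the bullets above (illustrative only; the exact numbers come from the underlying dataset):

```python
# Approximate shares from the excerpt above: heart disease is roughly
# one-third of US deaths but gets ~2.5% of coverage; violent deaths
# are under 3% of deaths but over two-thirds of coverage.
deaths   = {"heart disease": 0.33, "cancer": 0.30, "violent deaths": 0.03}
coverage = {"heart disease": 0.025, "cancer": 0.135, "violent deaths": 0.70}

def over_reporting(cause: str) -> float:
    """Ratio of media-coverage share to death share (>1 means over-reported)."""
    return coverage[cause] / deaths[cause]

for cause in deaths:
    print(f"{cause}: {over_reporting(cause):.1f}x")
```

On these rough numbers, violent deaths are covered at more than twenty times their share of mortality, while heart disease is covered at less than a tenth of its share.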