AI Copernicus ‘discovers’ that Earth orbits the Sun

Nature:

Astronomers took centuries to figure it out. But now, a machine-learning algorithm inspired by the brain has worked out that it should place the Sun at the centre of the Solar System, based on how movements of the Sun and Mars appear from Earth. The feat is one of the first tests of a technique that researchers hope they can use to discover new laws of physics, and perhaps to reformulate quantum mechanics, by finding patterns in large data sets. The results are due to appear in Physical Review Letters.

Physicist Renato Renner at the Swiss Federal Institute of Technology (ETH) in Zurich and his collaborators wanted to design an algorithm that could distill large data sets down into a few basic formulae, mimicking the way that physicists come up with concise equations like E = mc². To do this, the researchers had to design a new type of neural network, a machine-learning system inspired by the structure of the brain.

Conventional neural networks learn to recognize objects — such as images or sounds — by training on huge data sets. They discover general features — for example, ‘four legs’ and ‘pointy ears’ might be used to identify cats. They then encode those features in mathematical ‘nodes’, the artificial equivalent of neurons. But rather than distilling that information into a few, easily interpretable rules, as physicists do, neural networks are something of a black box, spreading their acquired knowledge across thousands or even millions of nodes in ways that are unpredictable and difficult to interpret.

So Renner’s team designed a kind of ‘lobotomized’ neural network: two sub-networks that were connected to each other through only a handful of links. The first sub-network would learn from the data, as in a typical neural network, and the second would use that ‘experience’ to make and test new predictions. Because few links connected the two sides, the first network was forced to pass information to the other in a condensed format. Renner likens it to how an adviser might pass on their acquired knowledge to a student.
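For readers who want to see the shape of such a model, here is a minimal sketch of the bottleneck idea in PyTorch. It is not the ETH team's actual code; the layer sizes, names and inputs are all illustrative. One sub-network compresses the observations into a few latent numbers, and the second must answer a prediction "question" using only that condensed representation.

```python
# A minimal sketch of a two-part "bottleneck" network (illustrative only,
# not the authors' implementation). The encoder condenses observations
# into a handful of latent values; the decoder must predict from those
# values plus a question (e.g. a future time), forcing a compact code.
import torch
import torch.nn as nn

class TwoPartNetwork(nn.Module):
    def __init__(self, n_obs=100, n_latent=2, n_question=1):
        super().__init__()
        # First sub-network: learns from the raw data and condenses it.
        self.encoder = nn.Sequential(
            nn.Linear(n_obs, 64), nn.ReLU(),
            nn.Linear(64, n_latent),   # the "handful of links"
        )
        # Second sub-network: uses the condensed 'experience' plus a
        # question to make and test predictions.
        self.decoder = nn.Sequential(
            nn.Linear(n_latent + n_question, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, observations, question):
        latent = self.encoder(observations)            # condensed format
        return self.decoder(torch.cat([latent, question], dim=-1))

model = TwoPartNetwork()
obs = torch.randn(8, 100)   # e.g. a batch of observed angle time series
q = torch.randn(8, 1)       # e.g. the time at which to predict
print(model(obs, q).shape)  # torch.Size([8, 1])
```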

First they came for the astronomers and I did nothing.

Rock climbing and the economics of innovation

Richard Jones:

The rock climber Alex Honnold’s free, solo ascent of El Capitan is inspirational in many ways. For economist John Cochrane, watching the film of the ascent has prompted a blogpost: “What the success of rock climbing tells us about economic growth”. He concludes that “Free Solo is a great example of the expansion of ability, driven purely by advances in knowledge, untethered from machines.” As an amateur in both rock climbing and innovation theory, I can’t resist some comments of my own. I think it’s all a bit more complicated than Cochrane thinks. In particular his argument that Honnold’s success tells us that knowledge – and the widespread communication of knowledge – is more important than new technology in driving economic growth doesn’t really stand up.

The film “Free Solo” shows Honnold’s 2017 ascent of the 3000 ft cliff El Capitan, in the Yosemite Valley, California. The climb was done free (i.e. without the use of artificial aids like pegs to make progress), and solo – without ropes or any other aids to safety. How come, Cochrane asks, rock climbers have got so much better at climbing since El Cap’s first ascent in 1958, which took 47 days, done with “siege tactics” and every artificial aid available at the time? “There is essentially no technology involved. OK, Honnold wears modern climbing boots, which have very sticky rubber. But that’s about it. And reasonably sticky rubber has been around for a hundred years or so too.”

Hold on a moment here – no technology? I don’t think the history of climbing really bears this out. Even the exception that Cochrane allows, sticky rubber boots, is more complicated than he thinks.

Maybe It’s Not YouTube’s Algorithm That Radicalizes People

Wired:

YouTube is the biggest social media platform in the country, and, perhaps, the most misunderstood. Over the past few years, the Google-owned platform has become a media powerhouse where political discussion is dominated by right-wing channels offering an ideological alternative to established news outlets. And, according to new research from Penn State University, these channels are far from fringe—they’re the new mainstream, and recently surpassed the big three US cable news networks in terms of viewership.

The paper, written by Penn State political scientists Kevin Munger and Joseph Phillips, tracks the explosive growth of alternative political content on YouTube, and calls into question many of the field’s established narratives. It challenges the popular school of thought that YouTube’s recommendation algorithm is the central factor responsible for radicalizing users and pushing them into a far-right rabbit hole.

The authors say that thesis largely grew out of media reports, and hasn’t been rigorously analyzed. The best prior studies, they say, haven’t been able to prove that YouTube’s algorithm has any noticeable effect. “We think this theory is incomplete, and potentially misleading,” Munger and Phillips argue in the paper. “And we think that it has rapidly gained a place in the center of the study of media and politics on YouTube because it implies an obvious policy solution—one which is flattering to the journalists and academics studying the phenomenon.”

Instead, the paper suggests that radicalization on YouTube stems from the same factors that persuade people to change their minds in real life—injecting new information—but at scale. The authors say the quantity and popularity of alternative (mostly right-wing) political media on YouTube is driven by both supply and demand. The supply has grown because YouTube appeals to right-wing content creators, with its low barrier to entry, easy way to make money, and reliance on video, which is easier to create and more impactful than text.

‘I’ve Got Nothing to Hide’ and Other Misunderstandings of Privacy

Daniel J. Solove in San Diego Law Review:

In this short essay, written for a symposium in the San Diego Law Review, Professor Daniel Solove examines the nothing to hide argument. When asked about government surveillance and data mining, many people respond by declaring: “I’ve got nothing to hide.” According to the nothing to hide argument, there is no threat to privacy unless the government uncovers unlawful activity, in which case a person has no legitimate justification to claim that it remain private. The nothing to hide argument and its variants are quite prevalent, and thus are worth addressing. In this essay, Solove critiques the nothing to hide argument and exposes its faulty underpinnings.

First published in 2007.  Still relevant.

Human speech may have a universal transmission rate: 39 bits per second

Science:

Italians are some of the fastest speakers on the planet, chattering at up to nine syllables per second. Many Germans, on the other hand, are slow enunciators, delivering five to six syllables in the same amount of time. Yet in any given minute, Italians and Germans convey roughly the same amount of information, according to a new study. Indeed, no matter how fast or slowly languages are spoken, they tend to transmit information at about the same rate: 39 bits per second, about twice the speed of Morse code.

“This is pretty solid stuff,” says Bart de Boer, an evolutionary linguist who studies speech production at the Free University of Brussels, but was not involved in the work. Language lovers have long suspected that information-heavy languages—those that pack more information about tense, gender, and speaker into smaller units, for example—move slowly to make up for their density of information, he says, whereas information-light languages such as Italian can gallop along at a much faster pace. But until now, no one had the data to prove it.

Scientists started with written texts from 17 languages, including English, Italian, Japanese, and Vietnamese. They calculated the information density of each language in bits—the same unit that describes how quickly your cellphone, laptop, or computer modem transmits information. They found that Japanese, which has only 643 syllables, had an information density of about 5 bits per syllable, whereas English, with its 6949 syllables, had a density of just over 7 bits per syllable. Vietnamese, with its complex system of six tones (each of which can further differentiate a syllable), topped the charts at 8 bits per syllable.
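The article does not spell out how those per-syllable figures are computed, but the underlying idea is Shannon entropy: count how often each syllable occurs and take the weighted log2. A toy sketch, with made-up syllable counts (the study's actual estimate also accounts for dependencies between neighbouring syllables):

```python
from math import log2
from collections import Counter

# Toy corpus of syllables; real estimates use large texts per language.
syllables = ["ka", "ta", "na", "ka", "shi", "ta", "ka", "no", "ta", "ka"]

counts = Counter(syllables)
total = sum(counts.values())

# Shannon entropy: the average number of bits needed per syllable.
bits_per_syllable = -sum((c / total) * log2(c / total) for c in counts.values())
print(f"{bits_per_syllable:.2f} bits per syllable")
```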

Next, the researchers spent 3 years recruiting and recording 10 speakers—five men and five women—from 14 of their 17 languages. (They used previous recordings for the other three languages.) Each participant read aloud 15 identical passages that had been translated into their mother tongue. After noting how long the speakers took to get through their readings, the researchers calculated an average speech rate per language, measured in syllables/second.

Some languages were clearly faster than others: no surprise there. But when the researchers took their final step—multiplying this rate by the bit rate to find out how much information moved per second—they were shocked by the consistency of their results. No matter how fast or slow, how simple or complex, each language gravitated toward an average rate of 39.15 bits per second, they report today in Science Advances. In comparison, the world’s first computer modem (which came out in 1959) had a transfer rate of 110 bits per second, and the average home internet connection today has a transfer rate of 100 megabits per second (or 100 million bits).
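The arithmetic in that final step is just density times speed. A quick illustration using the rounded bits-per-syllable figures quoted above; the speaking rates here are invented for the example, not the study's measured values:

```python
# Rounded bits-per-syllable figures quoted in the article.
bits_per_syllable = {"Japanese": 5.0, "English": 7.0, "Vietnamese": 8.0}

# Hypothetical speaking rates (syllables per second), chosen only to
# illustrate the density/speed trade-off.
syllables_per_second = {"Japanese": 7.8, "English": 5.6, "Vietnamese": 4.9}

for lang, density in bits_per_syllable.items():
    rate = density * syllables_per_second[lang]
    print(f"{lang}: ~{rate:.0f} bits per second")   # each lands near 39
```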

How social networks can be used to bias votes

Via Charles Arthur, Nature editorial board:

Politicians’ efforts to gerrymander — redraw electoral-constituency boundaries to favour one party — often hit the news. But, as a paper published in Nature this week shows, gerrymandering comes in other forms, too.

The work reveals how connections in a social network can also be gerrymandered — or manipulated — in such a way that a small number of strategically placed bots can influence a larger majority to change its mind, especially if the larger group is undecided about its voting intentions (A. J. Stewart et al. Nature 573, 117–118; 2019).

The researchers, led by mathematical biologist Alexander Stewart of the University of Houston, Texas, have joined those who are showing how it can be possible to give one party a disproportionate influence in a vote.

It is a finding that should concern us all.

A masterful understatement.

Auditing Radicalization Pathways on YouTube

Manoel Horta Ribeiro, Raphael Ottoni, Robert West, Virgílio A. F. Almeida, Wagner Meira at Cornell University:

Non-profits and the media claim there is a radicalization pipeline on YouTube. Its content creators would sponsor fringe ideas, and its recommender system would steer users towards edgier content. Yet, the supporting evidence for this claim is mostly anecdotal, and there are no proper measurements of the influence of YouTube’s recommender system. In this work, we conduct a large scale audit of user radicalization on YouTube. We analyze 331,849 videos of 360 channels which we broadly classify into: control, the Alt-lite, the Intellectual Dark Web (I.D.W.), and the Alt-right — channels in the I.D.W. and the Alt-lite would be gateways to fringe far-right ideology, here represented by Alt-right channels. Processing more than 79M comments, we show that the three communities increasingly share the same user base; that users consistently migrate from milder to more extreme content; and that a large percentage of users who consume Alt-right content now consumed Alt-lite and I.D.W. content in the past. We also probe YouTube’s recommendation algorithm, looking at more than 2M recommendations for videos and channels between May and July 2019. We find that Alt-lite content is easily reachable from I.D.W. channels via recommendations and that Alt-right channels may be reached from both I.D.W. and Alt-lite channels. Overall, we paint a comprehensive picture of user radicalization on YouTube and provide methods to transparently audit the platform and its recommender system.

Google knows this but does nothing.

The Myth of Consumer-Grade Security

Schneier on Security:

The Department of Justice wants access to encrypted consumer devices but promises not to infiltrate business products or affect critical infrastructure. Yet that’s not possible, because there is no longer any difference between those categories of devices. Consumer devices are critical infrastructure. They affect national security. And it would be foolish to weaken them, even at the request of law enforcement.

In his keynote address at the International Conference on Cybersecurity, Attorney General William Barr argued that companies should weaken encryption systems to gain access to consumer devices for criminal investigations. Barr repeated a common fallacy about a difference between military-grade encryption and consumer encryption: “After all, we are not talking about protecting the nation’s nuclear launch codes. Nor are we necessarily talking about the customized encryption used by large business enterprises to protect their operations. We are talking about consumer products and services such as messaging, smart phones, e-mail, and voice and data applications.”

The thing is, that distinction between military and consumer products largely doesn’t exist. All of those “consumer products” Barr wants access to are used by government officials — heads of state, legislators, judges, military commanders and everyone else — worldwide. They’re used by election officials, police at all levels, nuclear power plant operators, CEOs and human rights activists. They’re critical to national security as well as personal security.

This wasn’t true during much of the Cold War. Before the Internet revolution, military-grade electronics were different from consumer-grade. Military contracts drove innovation in many areas, and those sectors got the cool new stuff first. That started to change in the 1980s, when consumer electronics started to become the place where innovation happened. The military responded by creating a category of military hardware called COTS: commercial off-the-shelf technology. More consumer products became approved for military applications. Today, pretty much everything that doesn’t have to be hardened for battle is COTS and is the exact same product purchased by consumers. And a lot of battle-hardened technologies are the same computer hardware and software products as the commercial items, but in sturdier packaging.

My emphasis.

YouTube’s Susan Wojcicki: ‘Where’s the line of free speech – are you removing voices that should be heard?’

The Guardian:

The day before we meet, the tech site Gizmodo publishes a piece on how extremist channels remain on YouTube, despite the new policies. In the face of fairly constant criticism, does Wojcicki ever feel like walking away? “No, I don’t. Because I feel a commitment to solving these challenges,” she says. “I care about the legacy that we leave and about how history will view this point in time. Here’s this new technology, we’ve enabled all these new voices. What did we do? Did we decide to shut it down and say only a small set of people will have their voice? Who will decide that, and how will it be decided? Or do we find a way to enable all these different voices and perspectives, but find a way to manage the abuse of it? I’m focused on making sure we can manage the challenges of having an open platform in a responsible way.”

My emphasis. Her job depends upon her denying that there is a difference between merely uploading a video (and it being lost in the millions of others) and deliberately recommending it to others. The YouTube recommendation algorithm is simply toxic. And, like polluters everywhere, they do it because it makes them money.