When Paul Kay, then an anthropology graduate student at Harvard University, arrived in Tahiti in 1959 to study island life, he expected to have a hard time learning the local words for colors. His field had long espoused a theory called linguistic relativity, which held that language shapes perception. Color was the “parade example,” Kay says. His professors and textbooks taught that people could only recognize a color as categorically distinct from others if they had a word for it. If you knew only three color words, a rainbow would have only three stripes. Blue wouldn’t stand out as blue if you couldn’t name it.
What’s more, according to the relativist view, color categories were arbitrary. The spectrum of color has no intrinsic organization. Scientists had no reason to suspect that cultures divvied it up in similar ways. To an English speaker like Kay, the category “red” might include shades ranging from deep wine to light ruby. But to Tahitians, maybe “red” also included shades that Kay would call “orange” or “purple.” Or maybe Tahitians chunked colors not by a combination of hue, lightness and saturation, as Americans do, but by material qualities, like texture or sheen.
To his surprise, however, Kay found it easy to understand colors in Tahitian. The language had fewer color terms than English. For example, only one word, ninamu, translated to both green and blue (a combined category that linguists now call “grue”). But most Tahitian colors mapped astonishingly well to categories that Kay already knew intuitively, including white, black, red, and yellow. It was strange, he thought, that the groupings weren’t more random.
In a stunning discovery that overturns decades of textbook teaching, researchers at the University of Virginia School of Medicine have determined that the brain is directly connected to the immune system by vessels previously thought not to exist. That such vessels could have escaped detection when the lymphatic system has been so thoroughly mapped throughout the body is surprising on its own, but the true significance of the discovery lies in the effects it could have on the study and treatment of neurological diseases ranging from autism to Alzheimer’s disease to multiple sclerosis.
“Instead of asking, ‘How do we study the immune response of the brain?’ ‘Why do multiple sclerosis patients have the immune attacks?’ now we can approach this mechanistically. Because the brain is like every other tissue connected to the peripheral immune system through meningeal lymphatic vessels,” said Jonathan Kipnis, PhD, professor in the UVA Department of Neuroscience and director of UVA’s Center for Brain Immunology and Glia (BIG). “It changes entirely the way we perceive the neuro-immune interaction. We always perceived it before as something esoteric that can’t be studied. But now we can ask mechanistic questions.”
Based on epidemiological studies, Ian Morgan, a myopia researcher at the Australian National University in Canberra, estimates that children need to spend around three hours per day under light levels of at least 10,000 lux to be protected against myopia. This is about the level experienced by someone under a shady tree, wearing sunglasses, on a bright summer day. (An overcast day can provide less than 10,000 lux and a well-lit office or classroom is usually no more than 500 lux.) Three or more hours of daily outdoor time is already the norm for children in Morgan’s native Australia, where only around 30% of 17-year-olds are myopic. But in many parts of the world — including the United States, Europe and East Asia — children are often outside for only one or two hours.
In 2009, Morgan set out to test whether boosting outdoor time would help to protect the eyesight of Chinese children. He and a team from the Zhongshan Ophthalmic Center (where Morgan also works) launched a three-year trial in which they added a 40-minute outdoor class to the end of the school day for a group of six- and seven-year-olds at six randomly selected schools in Guangzhou; children at six other schools had no change in schedule and served as controls. Of the 900-plus children who attended the outdoor class, 30% developed myopia by age nine or ten compared with 40% of those at the control schools. The study is being prepared for publication.
A stronger effect was found at a school in southern Taiwan, where teachers were asked to send children outside for all 80 minutes of their daily break time instead of giving them the choice to stay inside. After one year, doctors had diagnosed myopia in 8% of the children, compared with 18% at a nearby school.
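Morgan's estimate can be restated as a simple daily-dose check: at least three hours per day at light levels of 10,000 lux or more. The sketch below makes that comparison concrete in Python; the schedule format and function name are illustrative assumptions of this sketch, not part of any study protocol.

```python
# Toy check of Ian Morgan's hypothesized protective threshold:
# roughly three hours per day at light levels of at least 10,000 lux.
# The schedule format and function name are illustrative, not from any study.

THRESHOLD_LUX = 10_000   # about the level in bright shade on a sunny day
THRESHOLD_HOURS = 3.0    # Morgan's estimated daily dose

def meets_morgan_threshold(schedule):
    """schedule: list of (hours, lux) blocks describing one day."""
    bright_hours = sum(hours for hours, lux in schedule if lux >= THRESHOLD_LUX)
    return bright_hours >= THRESHOLD_HOURS

# An indoor-heavy school day: six hours in a ~500-lux classroom, one hour outside.
indoor_day = [(6, 500), (1, 15_000)]
# A day with the kind of extra outdoor time the trials added.
outdoor_day = [(5, 500), (3.5, 15_000)]

print(meets_morgan_threshold(indoor_day))   # False
print(meets_morgan_threshold(outdoor_day))  # True
```

The point the numbers in the article make is visible here: a well-lit classroom at 500 lux contributes nothing toward the threshold, so only genuinely outdoor hours count.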
The standard interpretation of entanglement is that there is some kind of instant communication happening between the two particles. Any communication between them would have to travel the intervening distance instantaneously—that is, infinitely fast. That is plainly faster than light, a speed of communication prohibited by the theory of relativity. According to Einstein, nothing at all should be able to do that, leading him to think that some new physics must be operating, beyond the scope of quantum mechanics itself.
Suppose it is not the case that the particles (or dice) communicate instantaneously with each other, and it is also not the case that their values were fixed in advance. There seem to be no options remaining. But here Price asks us to consider the impossible: that doing something to either of the entangled particles causes effects which travel backward in time to the point in the past when the two particles were close together and interacting strongly. At that point, information from the future is exchanged, each particle alters the behavior of its partner, and these effects then carry forward into the future again. There is no need for instantaneous communication, and no violation of relativity.
“The history of science is full of cases where people thought a phenomenon was utterly unique, that there couldn’t be any possible mechanism for it, that we might never solve it, that there was nothing in the universe like it,” said Patricia Churchland of the University of California, a self-described “neurophilosopher” and one of Chalmers’s most forthright critics. Churchland’s opinion of the Hard Problem, which she expresses in caustic vocal italics, is that it is nonsense, kept alive by philosophers who fear that science might be about to eliminate one of the puzzles that has kept them gainfully employed for years. Look at the precedents: in the 17th century, scholars were convinced that light couldn’t possibly be physical – that it had to be something occult, beyond the usual laws of nature. Or take life itself: early scientists were convinced that there had to be some magical spirit – the élan vital – that distinguished living beings from mere machines. But there wasn’t, of course. Light is electromagnetic radiation; life is just the label we give to certain kinds of objects that can grow and reproduce. Eventually, neuroscience will show that consciousness is just brain states. Churchland said: “The history of science really gives you perspective on how easy it is to talk ourselves into this sort of thinking – that if my big, wonderful brain can’t envisage the solution, then it must be a really, really hard problem!”
Is the natural world creative? Just take a look around it. Look at the brilliant plumage of tropical birds, the diverse pattern and shape of leaves, the cunning stratagems of microbes, the dazzling profusion of climbing, crawling, flying, swimming things. Look at the “grandeur” of life, the “endless forms most beautiful and most wonderful,” as Darwin put it. Isn’t that enough to persuade you?
Ah, but isn’t all this wonder simply the product of the blind fumbling of Darwinian evolution, that mindless machine which takes random variation and sieves it by natural selection? Well, not quite. You don’t have to be a benighted creationist, nor even a believer in divine providence, to argue that Darwin’s astonishing theory doesn’t fully explain why nature is so marvelously, endlessly inventive. “Darwin’s theory surely is the most important intellectual achievement of his time, perhaps of all time,” says evolutionary biologist Andreas Wagner of the University of Zurich. “But the biggest mystery about evolution eluded his theory. And he couldn’t even get close to solving it.”
What Wagner is talking about is how evolution innovates: as he puts it, “how the living world creates.” Natural selection supplies an incredibly powerful way of pruning variation into effective solutions to the challenges of the environment. But it can’t explain where all that variation came from. As the biologist Hugo de Vries wrote in 1905, “natural selection may explain the survival of the fittest, but it cannot explain the arrival of the fittest.” Over the past several years, Wagner and a handful of others have been starting to understand the origins of evolutionary innovation. Thanks to their findings so far, we can now see not only how Darwinian evolution works but why it works: what makes it possible.
Using the latest deep-learning protocols, computer models consisting of networks of artificial neurons are becoming increasingly adept at image, speech and pattern recognition — core technologies in robotic personal assistants, complex data analysis and self-driving cars. But for all their progress training computers to pick out salient features from other, irrelevant bits of data, researchers have never fully understood why these algorithms, or the biological learning processes that inspired them, work.
Now, two physicists have shown that one form of deep learning works exactly like one of the most important and ubiquitous mathematical techniques in physics, a procedure for calculating the large-scale behavior of physical systems such as elementary particles, fluids and the cosmos.
The new work, completed by Pankaj Mehta of Boston University and David Schwab of Northwestern University, demonstrates that a statistical technique called “renormalization,” which allows physicists to accurately describe systems without knowing the exact state of all their component parts, also enables the artificial neural networks to categorize data as, say, “a cat” regardless of its color, size or posture in a given video.
“They actually wrote down on paper, with exact proofs, something that people only dreamed existed,” said Ilya Nemenman, a biophysicist at Emory University. “Extracting relevant features in the context of statistical physics and extracting relevant features in the context of deep learning are not just similar words, they are one and the same.”
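In its simplest textbook form, renormalization coarse-grains a system: blocks of microscopic variables are replaced by single summary variables, discarding fine detail while keeping large-scale structure. The sketch below applies majority-rule "block-spin" coarse-graining to a toy lattice of +1/-1 spins. It is a generic illustration of that idea only, not the Mehta-Schwab restricted-Boltzmann-machine construction itself.

```python
# Majority-rule block-spin renormalization of a 2-D lattice of +1/-1 spins:
# each 2x2 block is replaced by the sign of its sum, throwing away
# fine-grained detail while preserving large-scale structure. A generic
# textbook illustration, not the Mehta-Schwab RBM mapping itself.

def coarse_grain(lattice):
    n = len(lattice)  # assumes an even-sided square lattice
    out = []
    for i in range(0, n, 2):
        row = []
        for j in range(0, n, 2):
            s = (lattice[i][j] + lattice[i][j + 1]
                 + lattice[i + 1][j] + lattice[i + 1][j + 1])
            row.append(1 if s >= 0 else -1)  # ties broken toward +1
        out.append(row)
    return out

# A 4x4 lattice with a +1 domain on the left and a -1 domain on the right.
spins = [
    [ 1,  1, -1, -1],
    [ 1,  1, -1, -1],
    [ 1, -1, -1, -1],
    [ 1,  1,  1, -1],
]
print(coarse_grain(spins))  # [[1, -1], [1, -1]]
```

The coarse lattice keeps the large-scale left/right domain structure while the single flipped spins vanish, which is loosely analogous to how successive layers of a deep network retain only the features relevant at the next scale.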
“Two pizzas sitting on top of a stove top oven”
“A group of people shopping at an outdoor market”
“Best seats in the house”
People can summarize a complex scene in a few words without thinking twice. It’s much more difficult for computers. But we’ve just gotten a bit closer — we’ve developed a machine-learning system that can automatically produce captions (like the three above) to accurately describe images the first time it sees them. This kind of system could eventually help visually impaired people understand pictures, provide alternate text for images in parts of the world where mobile connections are slow, and make it easier for everyone to search on Google for images.
The simple fact of the matter is that going grocery shopping isn’t—and never was—as simple as you imagined, whether you’re on your own for the first time, or you’ve been shopping for a family of eight for 20 years.
Sometimes it seems less like you’re going out to buy milk and bread than you’re buffeted by endless marketing, too many choices, and not enough information. Does the perky green label mean that this box of cereal is good for me? Are there certain expiration dates that are less important than others? Am I a bad mom if I buy frozen spinach for dinner? How do I know what kind of fish to buy? Am I right to be a little scared of the butcher? And how did I end up spending $150 if all I went in for was some milk and bread?
This idea that nature is inherently probabilistic — that particles have no hard properties, only likelihoods, until they are observed — is directly implied by the standard equations of quantum mechanics. But now a set of surprising experiments with fluids has revived old skepticism about that worldview. The bizarre results are fueling interest in an almost forgotten version of quantum mechanics, one that never gave up the idea of a single, concrete reality.
The experiments involve an oil droplet that bounces along the surface of a liquid. The droplet gently sloshes the liquid with every bounce. At the same time, ripples from past bounces affect its course. The droplet’s interaction with its own ripples, which form what’s known as a pilot wave, causes it to exhibit behaviors previously thought to be peculiar to elementary particles — including behaviors seen as evidence that these particles are spread through space like waves, without any specific location, until they are measured.
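The feedback loop described here, in which each bounce deposits a ripple and the accumulated ripples steer later bounces, can be caricatured in one dimension. Every number below (the ripple waveform, memory decay, damping, and coupling strength) is an arbitrary toy choice of this sketch, not a value fitted to the actual experiments.

```python
import math

# Toy 1-D caricature of a "walking" droplet: each bounce deposits a ripple,
# and the droplet is pushed along by the slope of the superposition of ALL
# past ripples, i.e. its "pilot wave". Waveform, memory decay, damping, and
# coupling are arbitrary toy choices, not fitted to the real experiments.

K = 2 * math.pi      # ripple wavenumber
DECAY = 0.9          # older ripples fade by this factor per bounce ("memory")
COUPLING = 0.02      # how strongly the local wave slope pushes the droplet
DAMPING = 0.8        # drag on the droplet's horizontal motion

def wave_slope(x, bounces):
    """Slope at x of the superposed ripples left by all past bounces."""
    total = 0.0
    for age, xb in enumerate(reversed(bounces)):
        # derivative of a decaying cosine ripple centered at the bounce point
        total += -(DECAY ** age) * K * math.sin(K * (x - xb))
    return total

x, v = 0.0, 0.05     # a small initial nudge breaks the symmetry
bounces = []
for _ in range(100):
    bounces.append(x)                            # this bounce leaves a ripple
    v = DAMPING * v - COUPLING * wave_slope(x, bounces)
    x += v                                       # the droplet "walks"

print(x)
```

Even in this crude caricature, the droplet's trajectory depends on its entire bounce history through the wave field, which is the path-memory effect the experiments exploit.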
Particles at the quantum scale seem to do things that human-scale objects do not do. They can tunnel through barriers, spontaneously arise or annihilate, and occupy discrete energy levels. This new body of research reveals that oil droplets, when guided by pilot waves, also exhibit these quantum-like features.
To some researchers, the experiments suggest that quantum objects are as definite as droplets, and that they too are guided by pilot waves — in this case, fluid-like undulations in space and time. These arguments have injected new life into a deterministic (as opposed to probabilistic) theory of the microscopic world first proposed, and rejected, at the birth of quantum mechanics.
“This is a classical system that exhibits behavior that people previously thought was exclusive to the quantum realm, and we can say why,” said John Bush, a professor of applied mathematics at the Massachusetts Institute of Technology who has led several recent bouncing-droplet experiments. “The more things we understand and can provide a physical rationale for, the more difficult it will be to defend the ‘quantum mechanics is magic’ perspective.”