The fallacy of obviousness

Aeon:

So if the gorilla experiment doesn’t illustrate that humans are blind to the obvious, then what exactly does it illustrate? What’s an alternative interpretation, and what does it tell us about perception, cognition and the human mind?

The alternative interpretation says that what people are looking for – rather than what people are merely looking at – determines what is obvious. Obviousness is not self-evident. Or as Sherlock Holmes said: ‘There is nothing more deceptive than an obvious fact.’ This isn’t an argument against facts or for ‘alternative facts’, or anything of the sort. It’s an argument about what qualifies as obvious, why and how. Obviousness depends on what is deemed to be relevant for a particular question or task at hand. Rather than passively accounting for or recording everything directly in front of us, humans – and other organisms for that matter – instead actively look for things. The implication (contrary to psychophysics) is that mind-to-world processes drive perception rather than world-to-mind processes. The gorilla experiment itself can be reinterpreted to support this view of perception, showing that what we see depends on our expectations and questions – what we are looking for, what question we are trying to answer.

At first glance that might seem like a rather mundane interpretation, particularly when compared with the startling claim that humans are ‘blind to the obvious’. But it’s more radical than it might seem. This interpretation of the gorilla experiment puts humans centre-stage in perception, rather than relegating them to passively recording their surroundings and environments. It says that what we see is not so much a function of what is directly in front of us (Kahneman’s natural assessments), or what we record in camera-like fashion or passively look at, but rather is determined by what we have in our minds – for example, by the questions we have in mind. People miss the gorilla not because they are blind, but because they are prompted – in this case, by the scientists themselves – to pay attention to something else. The question – ‘How many basketball passes?’ (just like any question: ‘Where are my keys?’) – primes us to see certain aspects of a visual scene, at the expense of any number of other things.

The biologist Jakob von Uexküll (1864-1944) argued that all species, humans included, have a unique ‘Suchbild’ – German for a seek- or search-image – of what they are looking for. In the case of humans, this search-image includes the questions, expectations, problems, hunches or theories that we have in mind, which in turn structure and direct our awareness and attention. The important point is that humans do not observe scenes passively or neutrally. In 1966, the philosopher Karl Popper conducted an informal experiment to make this point. During a lecture at the University of Oxford, he turned to his audience and said: ‘My experiment consists of asking you to observe, here and now. I hope you are all cooperating and observing! However, I feel that at least some of you, instead of observing, will feel a strong urge to ask: “What do you want me to observe?”’ Then Popper delivered his insight about observation: ‘For what I am trying to illustrate is that, in order to observe, we must have in mind a definite question, which we might be able to decide by observation.’

In other words, there is no neutral observation. The world doesn’t tell us what is relevant. Instead, it responds to questions. When looking and observing, we are usually directed toward something, toward answering specific questions or satisfying some curiosities or problems. ‘All observation must be for or against a point of view,’ is how Charles Darwin put it in 1861. Similarly, the art historian Ernst Gombrich in 1956 emphasised the role of the ‘beholder’s share’ in observation and perception.

Fooling online users with dark patterns

Kottke:

From Evan Puschak, a quick video on dark patterns: UI design that tricks users into doing things they might not want to do. For instance, as he shows in the video, the hoops you need to jump through to delete your Amazon account are astounding; the option is buried levels deep in a place no one would ever think to look. This dark pattern is called a roach motel — users check in but they don’t check out.

Car crash not car “accident”

From crash not accident:

Before the labor movement, factory owners would say “it was an accident” when American workers were injured in unsafe conditions.

Before the movement to combat drunk driving, intoxicated drivers would say “it was an accident” when they crashed their cars.

Planes don’t have accidents. They crash. Cranes don’t have accidents. They collapse. And as a society, we expect answers and solutions.

Traffic crashes are fixable problems, caused by dangerous streets and unsafe drivers. They are not accidents. Let’s stop using the word “accident” today.

Language matters.

Facebook’s technical capabilities are the bottleneck to dystopia

François Chollet on Twitter:

If Facebook gets to decide, over the span of many years, which news you will see (real or fake), whose political status updates you’ll see, and who will see yours, then Facebook is in effect in control of your political beliefs and your worldview.

This is not quite news: since at least 2013, Facebook has been known to run a series of experiments in which it successfully controlled the moods and decisions of unwitting users by tuning the contents of their newsfeeds, as well as predicting users’ future decisions.

In short, Facebook can simultaneously measure everything about us and control the information we consume. When you have access to both perception and action, you’re looking at an AI problem. You can start establishing an optimization loop for human behavior. An RL (reinforcement learning) loop.

A loop in which you observe the current state of your targets and keep tuning what information you feed them, until you start observing the opinions and behaviors you wanted to see.
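To make the shape of that loop concrete, here is a minimal toy sketch in Python. It is not based on any real Facebook system: the simulated user, the content “slants” and the epsilon-greedy controller are all invented for illustration. It only shows the structure Chollet describes: observe the target’s state, choose what to feed them, measure the shift, repeat.

    # Toy sketch of the "RL loop" described above. Everything here is
    # hypothetical: a simulated user whose opinion drifts toward the
    # content it consumes, and an epsilon-greedy controller that learns
    # which content nudges that opinion toward a chosen target.
    import random

    class ToyUser:
        """Simulated user; opinion is a float in [-1, 1]."""
        def __init__(self):
            self.opinion = random.uniform(-1.0, 1.0)

        def consume(self, slant):
            # Opinion takes a small step toward the slant of consumed content.
            self.opinion += 0.1 * (slant - self.opinion)

    def control_loop(target=0.8, steps=500, epsilon=0.1):
        user = ToyUser()
        slants = [-1.0, -0.5, 0.0, 0.5, 1.0]   # available content "slants"
        value = {s: 0.0 for s in slants}       # running estimate of each slant's effect
        count = {s: 0 for s in slants}

        for _ in range(steps):
            # Explore occasionally; otherwise exploit the best-known slant.
            if random.random() < epsilon:
                slant = random.choice(slants)
            else:
                slant = max(slants, key=lambda s: value[s])

            before = abs(target - user.opinion)            # observe current state
            user.consume(slant)                            # act: tune the feed
            reward = before - abs(target - user.opinion)   # progress toward target

            count[slant] += 1
            value[slant] += (reward - value[slant]) / count[slant]

        return user.opinion

    print(control_loop())  # usually prints a value near the 0.8 target

The point is structural: perception (reading the user’s state) plus action (choosing what to show) is all a bandit-style optimizer needs in order to steer behavior toward a target.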

A good chunk of the field of AI research (especially the bits that Facebook has been investing in) is about developing algorithms to solve such optimization problems as efficiently as possible, to close the loop and achieve full control of the phenomenon at hand. In this case, us.

This is made all the easier by the fact that the human mind is highly vulnerable to simple patterns of social manipulation. While thinking about these issues, I have compiled a short list of psychological attack patterns that would be devastatingly effective.

Some of them have been used for a long time in advertising (e.g. positive/negative social reinforcement), but in a very weak, un-targeted form. From an information security perspective, you would call these “vulnerabilities”: known exploits that can be used to take over a system.

In the case of the human mind, these vulnerabilities never get patched; they are just the way we work. They’re in our DNA. They’re our psychology. On a personal level, we have no practical way to defend ourselves against them.

The human mind is a static, vulnerable system that will come increasingly under attack from ever-smarter AI algorithms that will simultaneously have a complete view of everything we do and believe, and complete control of the information we consume.

Importantly, mass population control — in particular political control — arising from placing AI algorithms in charge of our information diet does not necessarily require very advanced AI. You don’t need self-aware, superintelligent AI for this to be a dire threat.

So, if mass population control is already possible today — in theory — why hasn’t the world ended yet? In short, I think it’s because we’re really bad at AI. But that may be about to change. You see, our technical capabilities are the bottleneck here.