Bloomberg looks at digital hucksters and their symbiotic relationship with Facebook: “They go out and find the morons for me.”
Mentioned just because the headline alone gives one hope re the Facebook fiasco.
From Evan Puschak, a quick video on dark patterns, UI design that tricks users into doing things they might not want to do. For instance, as he shows in the video, the hoops you need to jump through to delete your Amazon account are astounding; it’s buried levels deep in a place no one would ever think to look. This dark pattern is called a roach motel — users check in but they don’t check out.
The Onion on Zuckerberg’s recent public appearances.
From crash not accident:
Before the labor movement, factory owners would say “it was an accident” when American workers were injured in unsafe conditions.
Before the movement to combat drunk driving, intoxicated drivers would say “it was an accident” when they crashed their cars.
Planes don’t have accidents. They crash. Cranes don’t have accidents. They collapse. And as a society, we expect answers and solutions.
Traffic crashes are fixable problems, caused by dangerous streets and unsafe drivers. They are not accidents. Let’s stop using the word “accident” today.
François Chollet on Twitter:
If Facebook gets to decide, over the span of many years, which news you will see (real or fake), whose political status updates you’ll see, and who will see yours, then Facebook is in effect in control of your political beliefs and your worldview.
This is not quite news: Facebook has been known since at least 2013 to have run a series of experiments in which it successfully controlled the moods and decisions of unwitting users by tuning the contents of their newsfeeds, and to have predicted users’ future decisions.
In short, Facebook can simultaneously measure everything about us, and control the information we consume. When you have access to both perception and action, you’re looking at an AI problem. You can start establishing an optimization loop for human behavior. An RL (reinforcement learning) loop.
A loop in which you observe the current state of your targets and keep tuning what information you feed them, until you start observing the opinions and behaviors you wanted to see.
A good chunk of the field of AI research (especially the bits that Facebook has been investing in) is about developing algorithms to solve such optimization problems as efficiently as possible, to close the loop and achieve full control of the phenomenon at hand. In this case, us.
This is made all the easier by the fact that the human mind is highly vulnerable to simple patterns of social manipulation. While thinking about these issues, I have compiled a short list of psychological attack patterns that would be devastatingly effective.
Some of them have been used for a long time in advertising (e.g. positive/negative social reinforcement), but in a very weak, un-targeted form. From an information security perspective, you would call these “vulnerabilities”: known exploits that can be used to take over a system.
In the case of the human mind, these vulnerabilities never get patched, they are just the way we work. They’re in our DNA. They’re our psychology. On a personal level, we have no practical way to defend ourselves against them.
The human mind is a static, vulnerable system that will come increasingly under attack from ever-smarter AI algorithms that will simultaneously have a complete view of everything we do and believe, and complete control of the information we consume.
Importantly, mass population control — in particular political control — arising from placing AI algorithms in charge of our information diet does not necessarily require very advanced AI. You don’t need self-aware, superintelligent AI for this to be a dire threat.
So, if mass population control is already possible today — in theory — why hasn’t the world ended yet? In short, I think it’s because we’re really bad at AI. But that may be about to change. You see, our technical capabilities are the bottleneck here.
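The loop Chollet describes — observe a target’s state, choose what information to feed them, observe the reaction, adjust — is structurally just a bandit-style RL problem. As a rough illustration, here is a minimal sketch using an epsilon-greedy strategy; all names and numbers are hypothetical, and the “target” is a toy stand-in, not anything from Chollet’s thread:

```python
import random

def simulate_reaction(item, target_bias):
    # Stand-in for "observing the target": the reaction is stronger
    # the more closely the item matches the target's current bias.
    return 1.0 - abs(item - target_bias)

def optimization_loop(target_bias, items, steps=500, epsilon=0.1, seed=0):
    """Epsilon-greedy loop: keep tuning what you show the target
    until you converge on the content that gets the reaction you want."""
    rng = random.Random(seed)
    estimates = {item: 0.0 for item in items}  # estimated reaction per item
    counts = {item: 0 for item in items}
    for _ in range(steps):
        if rng.random() < epsilon:
            item = rng.choice(items)  # explore: try something new
        else:
            item = max(items, key=lambda i: estimates[i])  # exploit best guess
        reward = simulate_reaction(item, target_bias)
        counts[item] += 1
        # incremental running mean of the observed reaction
        estimates[item] += (reward - estimates[item]) / counts[item]
    return max(items, key=lambda i: estimates[i])

# The loop converges on the item closest to the target's bias.
best = optimization_loop(target_bias=0.7, items=[0.0, 0.25, 0.5, 0.75, 1.0])
print(best)
```

The point of the sketch is how little machinery is needed: no model of the mind, just a feedback loop over perception (the simulated reaction) and action (the choice of item) — which is Chollet’s argument for why this doesn’t require very advanced AI.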
Now that Alexander Nix has been suspended as Cambridge Analytica chief executive, the hunt is on to see who else he has been meeting – in London or Washington. His meetings with UK officials would have been disclosed. But one wasn’t: a meeting with Boris Johnson in December 2016. The Foreign Secretary wasn’t seeking the algorithm that took Trump to victory – his objective was to try to learn about, and improve links with, Team Trump. And here was a Brit who, apparently, was a close part of that team.
Boris and Nix met on the advice of Foreign Office officials, at a time when Britain was scrambling for routes into the Trump administration. Nix had been deftly promoting himself as someone who had all sorts of connections in Trumpworld. The meeting lasted for twenty minutes – there were officials present, so presumably there was no talk of Ukrainian honeytraps. I’m told that Boris was ‘unimpressed’ by Nix: after all, he’d know an Etonian blowhard when he saw one.
Deftly done – leak the meeting before it is uncovered (it was obviously an innocent oversight that it wasn’t disclosed before) and, as a bonus, claim Johnson was unimpressed by Nix, thus disparaging anything else Nix might have said (say to a Channel 4 undercover reporter). But I do like the line “he’d know an Etonian blowhard when he saw one”.