This ‘Demonically Clever’ Backdoor Hides in a Tiny Slice of a Computer Chip

Wired:

The “demonically clever” feature of the Michigan researchers’ backdoor isn’t just its size, or that it’s hidden in hardware rather than software. It’s that it violates the security industry’s most basic assumptions about a chip’s digital functions and how they might be sabotaged. Instead of a mere change to the “digital” properties of a chip—a tweak to the chip’s logical computing functions—the researchers describe their backdoor as an “analog” one: a physical hack that takes advantage of how the actual electricity flowing through the chip’s transistors can be hijacked to trigger an unexpected outcome. Hence the backdoor’s name: A2, which stands for both Ann Arbor, the city where the University of Michigan is based, and “Analog Attack.”

Here’s how that analog hack works: After the chip is fully designed and ready to be fabricated, a saboteur adds a single component to its “mask,” the blueprint that governs its layout. That single component or “cell”—of which there are hundreds of millions or even billions on a modern chip—is made out of the same basic building blocks as the rest of the processor: wires and transistors that act as the on-or-off switches that govern the chip’s logical functions. But this cell is secretly designed to act as a capacitor, a component that temporarily stores electric charge.

Every time a malicious program—say, a script on a website you visit—runs a certain, obscure command, that capacitor cell “steals” a tiny amount of electric charge and stores it in the cell’s wires without otherwise affecting the chip’s functions. With every repetition of that command, the capacitor gains a little more charge. Only after the “trigger” command is sent many thousands of times does that charge hit a threshold where the cell switches on a logical function in the processor to give a malicious program the full operating system access it wasn’t intended to have. “It takes an attacker doing these strange, infrequent events in high frequency for a duration of time,” says Todd Austin, one of the Michigan researchers behind the project. “And then finally the system shifts into a privileged state that lets the attacker do whatever they want.”

That capacitor-based trigger design means it’s nearly impossible for anyone testing the chip’s security to stumble on the long, obscure series of commands to “open” the backdoor. And over time, the capacitor also leaks out its charge again, closing the backdoor so that it’s even harder for any auditor to find the vulnerability.

Seriously sneaky.
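For intuition, here is a toy model of that charge-and-leak trigger (mine, not the A2 paper’s; every constant below is invented):

```python
# Toy model of a charge-pump trigger (illustrative only: the real A2
# cell is analog circuitry, not software). All constants are invented.
LEAK = 0.999       # fraction of charge the capacitor keeps each cycle
STEP = 1.0         # charge deposited by one trigger command
THRESHOLD = 500.0  # charge at which the privilege line flips

def cycles_to_open(trigger_every, max_cycles=1_000_000):
    charge = 0.0
    for cycle in range(1, max_cycles + 1):
        if cycle % trigger_every == 0:
            charge += STEP   # attacker runs the obscure command
        charge *= LEAK       # constant leakage slowly closes the door
        if charge >= THRESHOLD:
            return cycle     # backdoor opens
    return None              # the charge never beats the leak

print(cycles_to_open(1))     # back-to-back triggers: opens in ~700 cycles
print(cycles_to_open(1000))  # occasional triggers: None, it leaks away first
```

Hammer the command back to back and the deposits outrun the leak; issue it at normal-workload rates and the leak wins, which is exactly why ordinary testing never trips it.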

Facebook accused of introducing extremists to one another through ‘suggested friends’ feature

Via Charles Arthur, The Telegraph:

Researchers, who analysed the Facebook activities of a thousand Isil supporters in 96 countries, discovered users with radical Islamist sympathies were routinely introduced to one another through the popular ‘suggested friends’ feature.

Facebook’s sophisticated algorithms are designed to connect people who share common interests.

The site automatically collects a vast amount of personal information about its users, which is then used to target advertisements and also direct people towards others on the network they might wish to connect with.

But without effective checks on what information is being shared, terrorists are able to exploit the site to contact and communicate with sympathisers and supporters.

The extent to which the ‘suggested friend’ feature is helping Isil members on Facebook is highlighted in a new study, the findings of which will be published later this month in an extensive report by the Counter Extremism Project, a non-profit that has called on tech companies to do more to remove known extremist and terrorist material online.

Gregory Waters, one of the authors of the report, described how he was bombarded with suggestions for pro-Isil friends after making contact with one active extremist on the site.

Even more concerning was the response his fellow researcher, Robert Postings, got when he clicked on several non-extremist news pages about an Islamist uprising in the Philippines. Within hours he had been inundated with friend suggestions for dozens of extremists based in that region.
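Facebook’s actual ranking is proprietary, but nothing exotic is needed to produce this failure mode; even the crudest interest-overlap recommender clusters people this way. A hypothetical sketch, with invented names and data:

```python
# Hypothetical friend suggester: rank strangers by overlap in the pages
# they follow (Jaccard similarity). Not Facebook's algorithm, just the
# simplest thing that behaves the way the researchers describe.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

follows = {  # invented users and pages
    "researcher":  {"uprising_news_1", "uprising_news_2", "cooking"},
    "extremist_a": {"uprising_news_1", "uprising_news_2", "militant_page"},
    "neighbor":    {"gardening", "cooking"},
}

def suggest(user, k=5):
    others = [u for u in follows if u != user]
    others.sort(key=lambda u: jaccard(follows[user], follows[u]), reverse=True)
    return others[:k]

print(suggest("researcher"))  # extremist_a ranks first: two shared pages
```

Follow a couple of niche pages and your nearest neighbors in interest space become whoever else clusters around them, which is Postings’ experience in miniature.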


Building successful online communities: Evidence-based social design

Via Charles Arthur, ACA Wiki:

The authors also suggest that ascribing blame or imposing community sanctions may be less effective than offering community members a way to “save face” “without having to admit that they deliberately violated the community’s norms.” They describe a system called stopit, designed at MIT to address computer-based harassment. When users reported harassment, the system sent the alleged harasser a message claiming that their account may have been compromised and urging them to change their password. Here is the rationale given by Gregory Jackson, the Director of Academic Computing at MIT, in 1994:

recipients virtually never repeat the offending behavior. This is important: even though recipients concede no guilt, and receive no punishment, they stop. [this system has] drastically reduced the number of confrontational debates between us and perpetrators, while at the same time reducing the recurrence of misbehavior. When we accuse perpetrators directly, they often assert that their misbehavior was within their rights (which may well be true). They then repeat the misbehavior to make their point and challenge our authority. When we let them save face by pretending (if only to themselves) that they did not do what they did, they tend to become more responsible citizens with their pride intact.
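As a system, that response is barely more than a message template. A hypothetical sketch of a stopit-style responder (the wording is my paraphrase of the description above, not MIT’s actual notice):

```python
# Hypothetical stopit-style responder: on a harassment report, send the
# reported account a face-saving notice instead of an accusation.
# (Wording is my paraphrase, not the actual MIT message.)
def handle_report(reported_account: str) -> str:
    # No blame is assigned, so the recipient can comply, and stop,
    # without ever conceding that the behavior was deliberate.
    return (
        f"To {reported_account}: messages that violate community policy "
        "appear to have been sent from your account, which may have been "
        "compromised. Please change your password."
    )

print(handle_report("user123"))
```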

How Facebook gives an asymmetric advantage to negative messaging

TechCrunch:

Few Facebook critics are as credible as Roger McNamee, the managing partner at Elevation Partners. As an early investor in Facebook, McNamee not only mentored Mark Zuckerberg but also introduced him to Sheryl Sandberg.

So it’s hard to overstate the significance of McNamee’s increasingly public criticism of Facebook over the last couple of years, particularly in light of the growing Cambridge Analytica storm.

According to McNamee, Facebook pioneered the building of a tech company on “human emotions”. Given that the social network knows all of our “emotional hot buttons”, McNamee believes, there is “something systemic” about the way that third parties can “destabilize” our democracies and economies. McNamee saw this in 2016 with both the Brexit referendum in the UK and the American presidential election and concluded that Facebook does, indeed, give “asymmetric advantage” to negative messages.

When algorithms surprise us

AI Weirdness:

When machine learning algorithms solve problems in unexpected ways, programmers find them, okay yes, annoying sometimes, but often purely delightful.

So delightful, in fact, that in 2018 a group of researchers wrote a fascinating paper that collected dozens of anecdotes that “elicited surprise and wonder from the researchers studying them”. The paper is well worth reading, as are the original references, but here are several of my favorite examples.

Floating-point rounding errors as an energy source: In one simulation, robots learned that small rounding errors in the math that calculated forces meant that they got a tiny bit of extra energy with motion. They learned to twitch rapidly, generating lots of free energy that they could harness. The programmer noticed the problem when the robots started swimming extraordinarily fast.
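The flavor of that first bug is easy to reproduce. A minimal sketch: it farms forward-Euler integration error rather than the floating-point rounding in the anecdote, but the result is the same free energy:

```python
# A frictionless spring-mass system integrated with forward Euler.
# Nothing in the physics adds energy, but the integration error does:
# each step multiplies total energy by (1 + dt**2).
x, v, dt = 1.0, 0.0, 0.01

def energy(x, v):
    return 0.5 * v * v + 0.5 * x * x  # kinetic plus spring potential

e0 = energy(x, v)
for _ in range(10_000):
    a = -x       # spring force (unit mass, unit stiffness)
    x += v * dt  # position update uses the old velocity...
    v += a * dt  # ...so every step quietly injects energy
print(energy(x, v) / e0)  # about 2.7 after 10,000 steps
```

An evolutionary search has no idea the energy is fake; it simply finds the motions that milk the integrator hardest, hence the impossibly fast swimmers.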

Harvesting energy from crashing into the floor: Another simulation had some problems with its collision detection math that robots learned to use. If they managed to glitch themselves into the floor (they first learned to manipulate time to make this possible), the collision detection would realize they weren’t supposed to be in the floor and would shoot them upward. The robots learned to vibrate rapidly against the floor, colliding repeatedly with it to generate extra energy.
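The floor exploit has the same shape. A hypothetical sketch: penalty-based contact (a force pushing the body out of the floor, proportional to penetration depth) plus a naive integrator turns every collision into a trampoline:

```python
# Drop a ball from 0.5 m onto a stiff penalty-force floor, integrated
# with forward Euler. Each contact returns more energy than it absorbed,
# so the bounces grow instead of decaying. All constants are invented.
DT, K, G = 0.01, 5000.0, -9.8  # timestep, floor stiffness, gravity

y, vy, peak = 0.5, 0.0, 0.5
for _ in range(2_000):
    a = G + (K * -y if y < 0.0 else 0.0)  # penalty force while penetrating
    y += vy * DT  # forward Euler: position update uses the old velocity
    vy += a * DT
    peak = max(peak, y)
print(peak)  # far above the 0.5 m drop height
```

A robot that learns to vibrate against the floor is just running this loop on purpose.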


Fooling online users with dark patterns

Kottke:

From Evan Puschak, a quick video on dark patterns, UI design that tricks users into doing things they might not want to do. For instance, as he shows in the video, the hoops you need to jump through to delete your Amazon account are astounding; it’s buried levels deep in a place no one would ever think to look. This dark pattern is called a roach motel — users check in but they don’t check out.