Category: Technology

“Zoom in. Now… enhance.”

Via Charles Arthur, David Garcia’s work on image super-resolution through deep learning is astounding:

Image super-resolution through deep learning. This project uses deep learning to upscale 16×16 images by a 4x factor. The resulting 64×64 images display sharp features that are plausible based on the dataset that was used to train the neural net.

Here’s a random, non-cherry-picked example of what this network can do. From left to right, the first column is the 16×16 input image, the second is what you would get from standard bicubic interpolation, the third is the output generated by the neural net, and on the right is the ground truth.

[Image: srez sample output]

As you can see, the network is able to produce a very plausible reconstruction of the original face. As the dataset is mainly composed of well-illuminated faces looking straight ahead, the reconstruction is poorer when the face is at an angle, poorly illuminated, or partially occluded by eyeglasses or hands.
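The srez project trains a far larger network than would fit here, but as a toy illustration of the two approaches being compared above, the following untrained PyTorch sketch puts a classic bicubic 4x resize next to a tiny learned 4x upsampler. The architecture, names, and shapes are illustrative assumptions, not the project’s own code.

```python
# Toy, untrained sketch (not the srez code): contrast bicubic interpolation
# with a small learned 4x upsampler of the kind the quoted project trains
# at scale on a face dataset. Architecture and names are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySuperRes(nn.Module):
    """Minimal 4x super-resolution net: conv features + PixelShuffle upsampling."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
            # 3 * 4 * 4 channels get rearranged into a 4x larger RGB image
            nn.Conv2d(64, 3 * 16, kernel_size=3, padding=1),
            nn.PixelShuffle(4),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# x stands in for a batch of 16x16 RGB crops scaled to [0, 1].
x = torch.rand(1, 3, 16, 16)

baseline = F.interpolate(x, scale_factor=4, mode="bicubic")  # classic upscaling
learned = TinySuperRes()(x)                                  # untrained here

print(baseline.shape, learned.shape)  # both (1, 3, 64, 64)
```

A real model would be trained on a large face dataset so that the upscaled output is sharp and plausible rather than merely smooth, which is the difference visible between the second and third columns above.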

The headline is a reference to the film & TV trope, which suddenly seems a lot more plausible.

Algorithmic bias

Seattle Times:

Search for a female contact on LinkedIn, and you may get a curious result. The professional networking website asks if you meant to search for a similar-looking man’s name.

A search for “Stephanie Williams,” for example, brings up a prompt asking if the searcher meant to type “Stephen Williams” instead.

It’s not that there aren’t any people by that name — about 2,500 profiles included Stephanie Williams.

But similar searches of popular female first names, paired with placeholder last names, bring up LinkedIn’s suggestion to change “Andrea Jones” to “Andrew Jones,” Danielle to Daniel, Michaela to Michael and Alexa to Alex.
The pattern repeats for at least a dozen of the most common female names in the U.S.
Searches for the 100 most common male names in the U.S., on the other hand, bring up no prompts asking if users meant predominantly female names.
LinkedIn says its suggested results are generated automatically by an analysis of the tendencies of past searchers. “It’s all based on how people are using the platform,” said spokeswoman Suzi Owens.
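LinkedIn has not published how these suggestions are generated beyond “how people are using the platform”, but a suggester driven purely by past search volume would reproduce exactly this pattern. The following is a hypothetical sketch, with invented names and counts, of how that can happen without anyone writing gender into the code:

```python
# Hypothetical sketch of a frequency-driven "did you mean" suggester.
# LinkedIn's real system is not public; this only shows how ranking
# suggestions by past search volume alone can surface gendered corrections.
from difflib import get_close_matches

# Invented historical search counts; the bias lives entirely in this data.
search_counts = {
    "stephen williams": 12000,
    "stephanie williams": 2500,
    "andrew jones": 9000,
    "andrea jones": 1800,
}

def did_you_mean(query: str):
    """Suggest a similar-looking name that past users searched for more often."""
    query = query.lower()
    candidates = get_close_matches(query, list(search_counts), n=3, cutoff=0.8)
    more_popular = [c for c in candidates
                    if c != query and search_counts[c] > search_counts.get(query, 0)]
    return max(more_popular, key=search_counts.get) if more_popular else None

print(did_you_mean("Stephanie Williams"))  # -> "stephen williams"
print(did_you_mean("Andrew Jones"))        # -> None (no more-popular near match)
```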

One company that has risen to the bias challenge is the neighborhood social network Nextdoor. Wired explains how it has cut racial profiling simply by prompting users for additional information, e.g. what was the person wearing. The Guardian explores why companies such as Airbnb don’t implement the same checks.


Microsoft Secure Boot Key Leak Shows Why Backdoors Can’t Work

Wired:

Apple’s refusal to comply with a court order to help the FBI crack an iPhone highlighted the pressure tech companies face to include backdoors in their software. This “new crypto war” pits public safety concerns against the argument that backdoors and robust security are mutually exclusive. A seemingly innocuous Windows feature designed to protect users underscores that point.

Two hackers published evidence on Tuesday showing that attackers can exploit a feature called Secure Boot and install the type of malicious software the feature was created to protect against. “You can see the irony,” the researchers, known by the handles Slipstream and MY123, wrote.

Secure Boot, which first appeared in Windows 8, bars computers from loading malware by confirming that software coordinating the operating system launch is trusted and verified. This ensures a computer isn’t tricked by a malicious program that then assumes control. Microsoft included a workaround so developers could test their software without fully validating it. It was never meant for hackers or police, but it is a backdoor just the same. And the keys leaked online.
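The article does not spell out the mechanics, but the core of a verified boot chain is signature checking at each stage, and the leaked policy is, in effect, a switch that skips it. Below is a rough, hypothetical sketch of that idea using Ed25519 signatures from the Python cryptography library; it is not Microsoft’s implementation, and the names and policy flag are invented.

```python
# Illustrative sketch only: how a verified-boot check and a "debug policy"
# bypass interact. This is not Microsoft's Secure Boot implementation.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In real firmware the public key is baked in; here we generate a pair.
vendor_key = Ed25519PrivateKey.generate()
vendor_pub = vendor_key.public_key()

def sign_bootloader(image: bytes) -> bytes:
    return vendor_key.sign(image)

def boot(image: bytes, signature: bytes, debug_policy: bool = False) -> bool:
    """Load the bootloader only if its signature verifies.

    `debug_policy` stands in for the leaked test/developer policy: once it is
    honoured, unsigned (malicious) images boot too, which is the backdoor.
    """
    if debug_policy:
        return True  # validation skipped entirely
    try:
        vendor_pub.verify(signature, image)
        return True
    except InvalidSignature:
        return False

official = b"signed OS loader"
malware = b"rootkit loader"
sig = sign_bootloader(official)

print(boot(official, sig))                    # True: trusted, signed image
print(boot(malware, sig))                     # False: signature check fails
print(boot(malware, b"", debug_policy=True))  # True: leaked policy bypasses the check
```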

Hopefully this will kill the crazy notion that a backdoor past encryption is a good idea. If Microsoft can’t keep control of a key, who can?

EU regulations on algorithmic decision-making and a “right to explanation”

Interesting paper:

We summarize the potential impact that the European Union’s new General Data Protection Regulation will have on the routine use of machine learning algorithms. Slated to take effect as law across the EU in 2018, it will restrict automated individual decision-making (that is, algorithms that make decisions based on user-level predictors) which “significantly affect” users. The law will also create a “right to explanation,” whereby a user can ask for an explanation of an algorithmic decision that was made about them. We argue that while this law will pose large challenges for industry, it highlights opportunities for machine learning researchers to take the lead in designing algorithms and evaluation frameworks which avoid discrimination.
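The regulation does not say what form an explanation must take, and the paper is largely about the design questions that creates. As a toy illustration only, for a simple linear scoring model a per-decision explanation can be as direct as reporting each feature’s contribution to the score; the model, features, and weights below are invented.

```python
# Toy sketch of a per-decision explanation for a linear scoring model.
# The GDPR text doesn't mandate any particular technique; this just shows
# one way a "why was I rejected?" answer could be generated automatically.
import math

# Hypothetical credit-style model: weights and features are invented.
weights = {"income": 0.8, "missed_payments": -1.5, "account_age_years": 0.3}
bias = -0.2

def decide_and_explain(user: dict):
    contributions = {f: weights[f] * user[f] for f in weights}
    score = bias + sum(contributions.values())
    approved = 1 / (1 + math.exp(-score)) >= 0.5
    # Explanation: features ranked by how strongly they pushed the decision.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    explanation = [f"{name}: {value:+.2f}" for name, value in ranked]
    return approved, explanation

approved, why = decide_and_explain(
    {"income": 0.4, "missed_payments": 2, "account_age_years": 1.0}
)
print("approved:", approved)  # False for this invented applicant
print("because:", why)        # missed_payments dominates the score
```

For the more complex models the paper has in mind, producing an equally faithful explanation is much harder, which is part of the challenge for industry that the authors describe.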


The problem with self-driving cars: who controls the code?

The Guardian:

The Trolley Problem is an ethical brainteaser that’s been entertaining philosophers since it was posed by Philippa Foot in 1967:

A runaway train will slaughter five innocents tied to its track unless you pull a lever to switch it to a siding on which one man, also innocent and unawares, is standing. Pull the lever, you save the five, but kill the one: what is the ethical course of action?

The problem has run many variants over time, including ones in which you have to choose between a trolley killing five innocents and personally shoving a man who is fat enough to stop the train (but not to survive the impact) into its path; a variant in which the fat man is the villain who tied the innocents to the track in the first place; and so on.

Now it’s found a fresh life in the debate over autonomous vehicles. The new variant goes like this: your self-driving car realizes that it can either divert itself in a way that will kill you and save, say, a busload of children; or it can plow on and save you, but the kids all die. What should it be programmed to do?

I can’t count the number of times I’ve heard this question posed as chin-stroking, far-seeing futurism, and it never fails to infuriate me. Bad enough that this formulation is a shallow problem masquerading as deep, but worse still is the way in which this formulation masks a deeper, more significant one.

Here’s a different way of thinking about this problem: if you wanted to design a car that intentionally murdered its driver under certain circumstances, how would you make sure that the driver never altered its programming so that they could be assured that their property would never intentionally murder them?