Reports of satellite navigation problems in the Black Sea suggest that Russia may be testing a new system for spoofing GPS, New Scientist has learned. This could be the first hint of a new form of electronic warfare available to everyone from rogue nation states to petty criminals.
On 22 June, the US Maritime Administration filed a seemingly bland incident report. The master of a ship off the Russian port of Novorossiysk had discovered his GPS put him in the wrong spot – more than 32 kilometres inland, at Gelendzhik Airport.
After checking that the navigation equipment was working properly, the captain contacted other nearby ships. Their AIS traces – signals from the automatic identification system used to track vessels – placed them all at the same airport. At least 20 ships were affected.
While the incident is not yet confirmed, experts think this is the first documented use of GPS misdirection – a spoofing attack that has long been warned of but never been seen in the wild.
Until now, the biggest worry for GPS has been that it can be jammed by drowning the satellite signal in noise. While this can cause chaos, it is also easy to detect: GPS receivers sound an alarm when jamming makes them lose the signal. Spoofing is more insidious: a false signal from a ground station simply fools the receiver into reporting a false position. “Jamming just causes the receiver to die, spoofing causes the receiver to lie,” says consultant David Last, former president of the UK’s Royal Institute of Navigation.
Todd Humphreys, of the University of Texas at Austin, has been warning of the coming danger of GPS spoofing for many years. In 2013, he showed how a superyacht with state-of-the-art navigation could be lured off-course by GPS spoofing. “The receiver’s behaviour in the Black Sea incident was much like during the controlled attacks my team conducted,” says Humphreys.
Image super-resolution through deep learning. This project uses deep learning to upscale 16×16 images by a factor of 4. The resulting 64×64 images display sharp features that are plausible given the dataset used to train the neural net.
Here’s a random, non-cherry-picked example of what this network can do. From left to right: the first column is the 16×16 input image, the second is what you would get from a standard bicubic interpolation, the third is the output generated by the neural net, and on the right is the ground truth.
As you can see, the network is able to produce a very plausible reconstruction of the original face. As the dataset is mainly composed of well-illuminated faces looking straight ahead, the reconstruction is poorer when the face is at an angle, poorly illuminated, or partially occluded by eyeglasses or hands.
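The bicubic baseline in the comparison above is just a spline interpolation. Here is a minimal sketch of that baseline step, assuming NumPy and SciPy are available; the neural-net stage is the project’s own model and is not reproduced here:

```python
import numpy as np
from scipy.ndimage import zoom

# Toy 16x16 grayscale array standing in for a low-resolution face crop.
rng = np.random.default_rng(0)
low_res = rng.random((16, 16))

# Bicubic interpolation (cubic spline, order=3) at a 4x scale factor --
# the standard baseline the network's 64x64 output is compared against.
high_res = zoom(low_res, 4, order=3)

print(high_res.shape)  # (64, 64)
```

The network replaces this interpolation step with learned upsampling; the rest of the comparison (16×16 input, 4× factor, 64×64 output) is identical.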
The headline is a reference to the film & TV trope which suddenly seems a lot more plausible.
Search for a female contact on LinkedIn, and you may get a curious result. The professional networking website asks if you meant to search for a similar-looking man’s name.
A search for “Stephanie Williams,” for example, brings up a prompt asking if the searcher meant to type “Stephen Williams” instead.
It’s not that there aren’t any people by that name: about 2,500 profiles matched “Stephanie Williams.”
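One plausible mechanism behind such a prompt is a frequency-weighted “did you mean” corrector: with no notion of gender, it steers queries toward whichever similar spelling dominates its data. A toy sketch of that idea, with hypothetical names and counts (nothing here reflects LinkedIn’s actual system):

```python
from difflib import SequenceMatcher

# Hypothetical name-frequency table; the counts are illustrative only.
name_counts = {"stephen": 120_000, "stephanie": 40_000, "steven": 90_000}

def did_you_mean(query, counts, min_similarity=0.7):
    """Suggest the most frequent name similar to the query.

    A frequency-weighted corrector like this has no notion of gender:
    it simply reproduces whatever skew exists in its underlying data.
    """
    best = None
    for name, count in counts.items():
        if name == query:
            continue
        sim = SequenceMatcher(None, query, name).ratio()
        if sim >= min_similarity and (best is None or count > counts[best]):
            best = name
    return best

print(did_you_mean("stephanie", name_counts))  # stephen
```

Because “stephen” outnumbers “stephanie” in the toy corpus, the corrector suggests the male spelling even though the query was perfectly valid.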
One company that’s risen to the bias challenge is the neighborhood social network Nextdoor. Wired explains how it has cut racial profiling simply by prompting users for specific details (e.g. what was the person wearing?). The Guardian explores why companies such as Airbnb don’t implement the same checks.
Apple’s refusal to comply with a court order to help the FBI crack an iPhone highlighted the pressure tech companies face to include backdoors in their software. This “new crypto war” pits public safety concerns against the argument that backdoors and robust security are mutually exclusive. A seemingly innocuous Windows feature designed to protect users underscores that point.
Two hackers published evidence on Tuesday showing that attackers can exploit a feature called Secure Boot to install the very type of malicious software the feature was created to protect against. “You can see the irony,” wrote the researchers, known by the handles Slipstream and MY123.
Secure Boot, which first appeared in Windows 8, bars computers from loading malware by confirming that software coordinating the operating system launch is trusted and verified. This ensures a computer isn’t tricked by a malicious program that then assumes control. Microsoft included a workaround so developers could test their software without fully validating it. It was never meant for hackers or police, but it is a backdoor just the same. And the keys leaked online.
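The shape of that workaround can be sketched with a toy signature check. This is an illustrative mock-up only, nothing like Microsoft’s actual Secure Boot scheme; the key and the “test-signing” policy name are invented for the example:

```python
import hashlib
import hmac

# Toy stand-in for a platform signing key -- illustrative only.
PLATFORM_KEY = b"platform-secret"

def sign(blob: bytes) -> bytes:
    """Produce a signature for a boot component (toy HMAC scheme)."""
    return hmac.new(PLATFORM_KEY, blob, hashlib.sha256).digest()

def verify_boot(blob: bytes, signature: bytes, policy: dict) -> bool:
    """Load a boot component only if its signature checks out.

    The 'test_signing' policy mirrors the leaked debug mode: once it is
    enabled, validation is skipped entirely -- a convenience for
    developers that works just as well for attackers.
    """
    if policy.get("test_signing"):
        return True  # debug policy: trust anything
    return hmac.compare_digest(signature, sign(blob))

malware = b"definitely-not-a-bootloader"
print(verify_boot(malware, b"bogus", {"test_signing": False}))  # False
print(verify_boot(malware, b"bogus", {"test_signing": True}))   # True
```

The design flaw is that the bypass is a *data* artifact (a policy blob signed once by the vendor), so anyone who obtains it gets the bypass everywhere, which is exactly what happened when the keys leaked.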
Hopefully this will kill the crazy idea that a backdoor past encryption is a good idea. If Microsoft can’t keep control of a key, who can?
You need a long, long list of concepts such as ConceptNet.
We summarize the potential impact that the European Union’s new General Data Protection Regulation will have on the routine use of machine learning algorithms. Slated to take effect as law across the EU in 2018, it will restrict automated individual decision-making (that is, algorithms that make decisions based on user-level predictors) which “significantly affect” users. The law will also create a “right to explanation,” whereby a user can ask for an explanation of an algorithmic decision that was made about them. We argue that while this law will pose large challenges for industry, it highlights opportunities for machine learning researchers to take the lead in designing algorithms and evaluation frameworks which avoid discrimination.
CESG, the Information Security Arm of GCHQ, offers advice about passwords.
The Trolley Problem is an ethical brainteaser that’s been entertaining philosophers since it was posed by Philippa Foot in 1967:
A runaway train will slaughter five innocents tied to its track unless you pull a lever to switch it to a siding on which one man, also innocent and unawares, is standing. Pull the lever, you save the five, but kill the one: what is the ethical course of action?
The problem has spawned many variants over time, including one in which you must choose between letting the trolley kill five innocents or personally shoving into its path a man fat enough to stop it (but not to survive the impact); another in which the fat man is the villain who tied the innocents to the track in the first place; and so on.
Now it’s found a fresh life in the debate over autonomous vehicles. The new variant goes like this: your self-driving car realizes that it can either divert itself in a way that will kill you and save, say, a busload of children; or it can plow on and save you, but the kids all die. What should it be programmed to do?
I can’t count the number of times I’ve heard this question posed as chin-stroking, far-seeing futurism, and it never fails to infuriate me. Bad enough that the formulation is a shallow problem masquerading as a deep one; worse still is the way it masks a deeper, more significant question.
Here’s a different way of thinking about this problem: if you wanted to design a car that intentionally murdered its driver under certain circumstances, how would you stop the driver from altering its programming so that they could be sure their property would never intentionally murder them?