The problem with self-driving cars: who controls the code?

The Guardian:

The Trolley Problem is an ethical brainteaser that’s been entertaining philosophers since it was posed by Philippa Foot in 1967:

A runaway train will slaughter five innocents tied to its track unless you pull a lever to switch it to a siding on which one man, also innocent and unaware, is standing. Pull the lever and you save the five but kill the one: what is the ethical course of action?

The problem has spawned many variants over time, including ones in which you must choose between letting the train kill the five innocents and personally shoving a man who is fat enough to stop it (but not to survive the impact) into its path; a variant in which the fat man is the villain who tied the innocents to the track in the first place; and so on.

Now it has found fresh life in the debate over autonomous vehicles. The new variant goes like this: your self-driving car realizes that it can either divert itself in a way that will kill you and save, say, a busload of children, or plow on and save you while the kids all die. What should it be programmed to do?

I can’t count the number of times I’ve heard this question posed as chin-stroking, far-seeing futurism, and it never fails to infuriate me. Bad enough that it’s a shallow problem masquerading as a deep one; worse still is the way this formulation masks a deeper, more significant question.

Here’s a different way of thinking about the problem: if you wanted to design a car that intentionally murdered its driver under certain circumstances, how would you make sure that the driver could never alter its programming to guarantee that their own property would never intentionally murder them?