Your autonomous car needs to avoid an accident. Does it swerve left towards two children or right towards 10 adults? Or does it keep going and risk killing you? Tim Green mulls over the moral quandaries of our driverless future…

Imagine if your driverless car had an ethics dial.

You would sit in the vehicle, program the SatNav with your destination, key in your music choice and set the temperature. Finally, you would decide how selfish you want the car to be. More precisely, you would tell the car whether – in the event of an emergency – to save you or the hapless pedestrians you are speeding towards.

It’s a fanciful idea. But it’s one that has been discussed as part of a growing debate about the ethical and legal implications of living in an era of autonomous automobiles.

The first conversations about connected cars were mostly technical. Would they be safe? Would they save fuel? However, now that the concept is moving closer to reality, people are thinking more deeply about ethical questions.

Specifically, they are thinking about the ‘trolley problem’.

This is a familiar subject in philosophy textbooks. It asks people to decide what they would do in the following situation: a runaway trolley is speeding down the railway tracks, headed straight for five people. But there is a lever. If you pull it, the trolley will divert onto a side track and kill just one person instead.

Would you do it?

And then it gets murkier. What if you could push a fat man into the trolley's path to stop it killing the five people? Would you do that?

Apply the trolley problem to the driverless car, and you have to ask: what should the car’s AI be programmed to do in such a situation?

Maybe the car’s human ’driver’ should decide. Which brings us back to that ethics dial. Here, the motorist could choose to favour pedestrians if she were alone in the car. But with her kids in the back, she could reset the dial to prioritise the car’s occupants.

This is just a thought experiment, but it’s already been the subject of research. In one survey, 44 per cent of respondents said they would like to have control over an ethics setting, while 12 per cent said they would prefer the manufacturer to pre-set it.
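To make the thought experiment concrete: if an ethics dial ever existed, it might amount to little more than a weighting parameter inside the car’s collision-avoidance logic. The sketch below is purely hypothetical Python with invented names – it reflects no real vehicle’s software – but it shows how a single occupant-versus-pedestrian weighting could tip the choice between two bad outcomes.

```python
# Hypothetical sketch only: a crude 'ethics dial' modelled as a single
# weighting parameter. Nothing here reflects any real vehicle's software.

from dataclasses import dataclass


@dataclass
class Outcome:
    """An emergency manoeuvre and the harm it is expected to cause."""
    name: str
    expected_occupant_harm: float    # 0.0 (none) to 1.0 (fatal)
    expected_pedestrian_harm: float  # 0.0 (none) to 1.0 (fatal)


def choose_manoeuvre(outcomes, occupant_weight=0.5):
    """Pick the outcome with the lowest weighted harm.

    occupant_weight is the 'ethics dial': 0.0 always favours
    pedestrians, 1.0 always favours the car's occupants.
    """
    def weighted_harm(o):
        return (occupant_weight * o.expected_occupant_harm
                + (1 - occupant_weight) * o.expected_pedestrian_harm)

    return min(outcomes, key=weighted_harm)


if __name__ == "__main__":
    options = [
        Outcome("brake hard, stay in lane", 0.7, 0.2),
        Outcome("swerve towards the wall", 0.9, 0.0),
        Outcome("swerve onto the pavement", 0.1, 0.8),
    ]
    # Alone in the car: dial set to favour pedestrians.
    print(choose_manoeuvre(options, occupant_weight=0.2).name)
    # Kids in the back: dial reset to protect occupants.
    print(choose_manoeuvre(options, occupant_weight=0.8).name)
```

With the dial at 0.2 the sketch swerves towards the wall; at 0.8 it mounts the pavement – which is precisely why the idea is so uncomfortable, and why someone has to decide who sets it.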

It’s all made more complicated by the fact that we might apply different judgements to human drivers and AI. After all, a human might be forgiven for making a fatal decision in a split second. But AI programmers have time to consider their decisions when they are coding the software.

Tricky isn’t it?

It’s fair to say the idea of robots ‘driving’ cars is fraught with moral quandaries. And a lot of commercial ones. At present there are no firm answers. But there are lots of questions such as:

Should driverless cars break the law?

Sometimes people mount the pavement or break the speed limit to avoid accidents. Should AI do the same? What about setting off with a broken light? No? But what if it’s the daytime? Where’s the line?

OK to hit squirrels, but not dogs?

This is related to the trolley question. Should driverless cars have a hierarchy of animals to swerve?

How ‘irresponsible’ can occupants be?

Is it OK to be drunk inside a driverless car?

How incapacitated can occupants be?

Can a blind person be in control of an autonomous vehicle? A child?

What happens when self-driving cars inspire road rage?

In a world where autonomous cars co-exist with regular ones, might cautious, law-abiding ‘robots’ infuriate impatient humans? And who would be held to blame when disputes arise?

Should driverless cars make judgements about occupants?

If drink limits, for example, are set for occupants, should cars be programmed to detect alcohol? And should they deactivate if the humans are under the influence?

How much should a car know about its driver?

Should it track your visits to a known crime scene and pass them to the police? Can its systems be made to disclose location information in a divorce case?

Who pays when there’s a collision?

If a driverless car malfunctions and causes an accident, whose fault is that? The occupant could be to blame. But he might blame the car maker, who might blame the software provider, who might blame the OS maker…

Who pays when there’s a fine?

A driverless car gets a parking ticket because a no-parking zone has just changed. Whose fault is that? Should the mapping system have known? Did the local authority inform the car maker? Should it have to?

Can a human driver be penalised for failing to take control?

A self-driving car malfunctions, and alerts its occupant to take the wheel. What if she doesn’t, and there’s an accident?

One wonders how much attention will be paid to these questions as the trials begin in earnest – which, in fact, they already have.

Just weeks ago, Jaguar Land Rover revealed a 41-mile ‘living laboratory’ around Coventry and Solihull in the UK to assess real-world autonomous driving conditions.

They’ll be testing ‘vehicle-to-vehicle’ and ‘vehicle-to-infrastructure’ systems to see how well 100 connected cars perform on real roads.

It’s hard to imagine how they will factor in ethical concerns like driver drunkenness and stray dogs. But they should.