Voices

No, Mercedes-Benz does not think autonomous cars should always protect their occupants

Automation raises some very tricky philosophical dilemmas, not least of which is the “trolley question”.

The “trolley question” is a simple ethics problem. You have a loaded trolley barrelling down a railway track. Ahead of the trolley are five people, tied to the rails. The trolley will strike and kill them. However, you are standing by a set of points and have the chance to divert the trolley down a different track, saving the five…but instead killing one person who is tied to that set of rails.

What do you do?

At its simplest, you’d naturally pull the lever. But the real world is never that simple, and indeed many people choose to let the five die rather than interfere. Yet that’s just for starters. How about these scenarios:

  • five criminals, one saint
  • one young person, five really old people

Hard to decide? Still not even close to a real-world scenario, so let’s introduce some uncertainty. What about:

  • a high chance of killing one person vs a low chance of killing five
  • maiming five people, or killing one

There’s no end of complications. What if the one person was your closest friend? What if the five others were not five but fifty? And in the case of the car, would it make a difference if you owned it? Should it prioritise your life over the lives of others?

Some say it’s a moot point because the situation will be very rare. That sort of thinking was on display when the Titanic was designed, too. No matter how good the tech, you need to consider the worst-case scenario: even if 99.999% of journeys happen without incident, given the number of journeys worldwide that’s still a lot of problems to deal with. And the car has to be programmed *before* it has a problem; it’s not a case of dealing with the aftermath of an accident and deciding blame.

We could explore this interesting and somewhat disturbing question forever, but that’s not the point of this post. The point is that it’s a problem nobody has solved, which is why I was surprised to see that Mercedes-Benz apparently had a corporate position on the matter. I was even more surprised because when I talked to their safety experts myself recently, they were quite clear the problem hadn’t been resolved and they were staying out of it.

Anyway, the source of the statement was Christoph von Hugo, Mercedes-Benz’s manager of driver assistance systems and active safety, who said at the Paris Motor Show:

“If you know you can save at least one person, at least save that one. Save the one in the car” and “If all you know for sure is that one death can be prevented, then that’s your first priority”.

This looks like a rather specific part-answer to a specific scenario, and unfortunately the full context wasn’t available. So I called Mercedes-Benz Australia and asked about their position on the “trolley problem”. Jerry Stamoulis, manager of public relations, answered:

“This is not yet resolved as there are many, if not thousands of situations being assessed. The ‘trolley problem’ is a hot topic at the moment and today we don’t have all the answers and nor do we have autonomous cars.” He went on to say that “Mercedes-Benz has always prioritised safety for all road users and I’m confident that any new technology we launch will continue to save lives rather than put anyone at risk.”

So, let’s be quite clear: Mercedes-Benz have not solved the trolley problem, and do not have a firm position on it, as you’d expect. No individual car company is going to solve this problem. However, the trolley problem is a dark cloud over the future of autonomous driving, because it is a question that must be addressed before self-driving cars go much further. But why now, you ask? Haven’t we faced this problem, or a variant of it, since motorised transport was invented?

The answer is no. Let’s take an example of a human driver in a manually controlled car rounding a corner and having to choose between driving off a cliff or running over a pedestrian. The human driver will react by instinct in a split second, and use whatever information is available to him or her. It will be an unfortunate accident either way.

Contrast that with a computer-controlled car. Confronted with the same situation, the computer would use its pre-programmed logic to determine what to do. And that is the problem: who is going to design the logic by which the car makes life-and-death decisions? Someone needs to sit down and pre-determine what will happen in critical situations, ahead of time, a task whose complexity, difficulty and importance are hard to overstate, not just for the lives of future people but for the effect on society as a whole. The programming of the computer will be difficult, but that is easy compared to deciding the logic of the car’s behaviour.
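To see the shape of the difficulty, here is a deliberately toy sketch in Python. Nothing here reflects what Mercedes-Benz or any manufacturer actually programs; every name and number is hypothetical. The point is that even the simplest possible rule, “minimise expected harm”, forces some human to write the value judgements down as constants in code:

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    # Estimated fatalities if this manoeuvre goes wrong, and the
    # probability that it does. Both numbers are guesses someone
    # must supply before any accident ever happens.
    fatalities: int
    probability: float

def expected_harm(outcome: Outcome) -> float:
    # Collapsing an outcome to a single number is itself an ethical
    # choice: it treats every life as interchangeable and ignores
    # age, fault, and everything else the trolley variants raise.
    return outcome.fatalities * outcome.probability

def choose(swerve: Outcome, stay: Outcome) -> str:
    # Pick whichever action minimises expected harm. But who decided
    # that "minimise expected fatalities" was the right rule at all?
    return "swerve" if expected_harm(swerve) < expected_harm(stay) else "stay"

# The cliff: near-certain death for the occupant if the car swerves.
# The pedestrian: near-certain death if the car stays on course.
decision = choose(Outcome(fatalities=1, probability=0.95),
                  Outcome(fatalities=1, probability=0.90))
print(decision)  # prints "stay"
```

Ten lines of logic, and already the constants 0.95 and 0.90 silently decide who lives. Change either estimate by a few hundredths and the car kills someone else instead, which is exactly the pre-determination the paragraph above describes.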

Ironically, the car’s problem will not be a lack of data on which to make a decision; it will be too much data. The car is likely to know precisely how many passengers it has, their ages, social records and health. It will scan ahead and gather similar data about other cars’ occupants and pedestrians, because the technology in today’s smartphones will look primitive in ten years’ time, let alone twenty. The car will know how far away the nearest hospital is, and who has insurance and who does not. It will have mountains of data but not enough information, and even if it did have the information, the trolley problem is fundamentally an ethical judgement call. And that’s one humankind hasn’t yet agreed a solution for, let alone Mercedes-Benz.

Robert Pepper