The Trolley Problem is a problem for humans too. One slight difference is that for an autonomous vehicle, the entire situation can be analysed after the fact in minute detail and future behaviour modified. For a fatal accident with a human driver, the chance of knowing what they did and why is reduced to inferences from forensic examination. Even if Automobile Black Boxes (ABBs) were mandated, they would not capture the reasoning behind actions.
It's a famous problem, but figuring out "what happened" isn't what makes it challenging. What makes it challenging is that you have to make the choice.
Right now, society is comfortable with each individual driver making the choice of what to do in that scenario. In a future of autonomous vehicles, we're leaving that choice up to either automobile companies or the government.
You'd no longer have that choice. You'd no longer get to decide whether or not your own car could deliberately choose to kill you, due to something entirely out of your control.
In a hypothetical fully autonomous vehicle (i.e. one with no driver controls), liability would likely be handled by a new type of insurance policy dedicated to driverless vehicles. Holding manufacturers liable would only work if an accident was the result of a systemic issue rather than an individual failure, and that would be something for a third-party investigative body to determine after the fact.
Who makes the surviving relatives of the deceased driver or family whole in my example above? That's a deliberate choice by an automotive company with no 'right' answer. What if a potential road hazard, unforeseen by an automotive company, causes an autonomous vehicle to careen off a bridge and kill a crowd? Is that a claim against the insurer, or against faulty software?
There are countless examples like this. There will be countless accidents like this. Perfect information does not resolve the ambiguity of an imperfect world.
I imagine most people would like to think they're altruistic enough to save the family if they were driving instead of a computer, so I don't see why that logic wouldn't extend to self-driving cars.
I guarantee you that there will be millions of people who will simply refuse to be in a car that would choose to kill the passengers in any context. "Why should my own car kill me if a family is crossing a road illegally? Why do I care about saving lives across society, if it means killing more passengers, which is what I am? What if the car is wrong, and it could have avoided hurting anyone? Do I really trust some developers I don't know, and technology that can be hacked, with the lives of my children? Do I trust AI?"
And that means millions of people who will resist or outright fight a transition to all-autonomous vehicles. Enough to kill that change, or seriously set it back. This is a very real problem.
Google has already run into this issue, though: human drivers rear-end the autonomous car because it's overly cautious and brakes often. Arguably the accident wouldn't have happened with a human driving instead of the computer, but in the end the other driver is at fault for not maintaining a safe distance.
This friction, this incongruity, this issue of cohabitation is what I'm driving at (no pun intended). Again, the biggest challenge for autonomous driving isn't the technology; it's the cultural and psychological realities of people. People driving alongside autonomous vehicles is going to be ugly. Vehicles making ethical decisions for us is going to be ugly. We cannot assume that society will see the objective benefits of technology and accept them without fear. And there are profound ethical questions and realities that we have to tackle, and we just haven't.
Society has proven itself great at technological progress, but terrible at ethics. If we don't proactively enable our ethical schools of thought to catch up to our engineering acumen, we're going to scare and hurt and kill a lot of people. But nobody's talking about it - not politicians, not governments, not the people buying the cars, and certainly not the people making them.
I'm excited for the future, and for technologies like these, I really am. I consider myself an optimist about the future. But stuff like this, frankly, should terrify us. There's as much potential downside to this stuff as there is upside. The status quo of how we handle that intersection of technology and society and ethics is not enough.