Tesla Model 3

Would you get one?

  • Yes: 31 votes (83.8%)
  • No: 6 votes (16.2%)
  • Total voters: 37

BirdofPrey

Standards Guru
Sep 3, 2015
797
493
Well, Elon said he wanted to have that integrated into the app if I understood correctly, so ideally an autonomous car would not only get one person, but four who would normally travel a similar route. That way you can reduce traffic down to 1/4th, and everyone going to work/leaving at the same time would help with the efficiency of such a system.
That would be great. It's an uphill battle in most municipalities to get people to carpool.
BTW, I take it from the comments so far that no one has any qualms about letting a computer drive the car even in light of the recent Autopilot fatality?
Nah, at least not in the long term. There are bugs to work out, sure, but it will happen eventually (and from what I read, it was more of a sensor issue than a software one; the car literally could not see that truck). Aside from that, it's an isolated incident, there are stories of the cars stopping themselves when the driver didn't, and people die in car accidents every day anyway; those just aren't the kind of juicy stories the media likes to pick up and run with.
 

iFreilicht

FlexATX Authority
Feb 28, 2015
3,243
2,361
freilite.com
BTW, I take it from the comments so far that no one has any qualms about letting a computer drive the car even in light of the recent Autopilot fatality?

Not really. We all know that the current Autopilot is not ready for fully autonomous driving; there are enough instances where it lost track of the road for a short while, and stuff like that. We're coming closer and closer to fully autonomous driving, and I fully expect that in 10 years it will be much safer than a human operating a car, even if crashes will still happen now and then.

What caused this crash is certainly fixable.
 
  • Like
Reactions: Biowarejak

Phuncz

Lord of the Boards
SFFn Staff
May 9, 2015
5,937
4,951
Nah, at least not in the long term. There are bugs to work out, sure, but it will happen eventually (and from what I read, it was more of a sensor issue than a software one; the car literally could not see that truck). Aside from that, it's an isolated incident, there are stories of the cars stopping themselves when the driver didn't, and people die in car accidents every day anyway; those just aren't the kind of juicy stories the media likes to pick up and run with.
Yeah, I fully agree.

Popular media doesn't care about relative risk, just views and clicks. That there's one fatality amongst the many, many more each day due to human negligence, carelessness, overconfidence, stress or plain YOLO doesn't matter to the "old is always better" folk. I find it remarkable that there has only been one fatality so far, regrettable as it is that it happened at all, given how pioneeringly new the technology still is.

We shouldn't kid ourselves: self-driving cars will not eliminate transportation deaths. Google, Microsoft and Apple can't even keep their massively redundant and extremely robust server infrastructure up 100% of the year, so why shouldn't we expect hardware failures and technical design errors in autonomous transportation?

And let's not forget what will happen when companies start to lower costs "creatively". In the end I do believe road travel will be much safer, but it's never going to be the utopia that some expect.
 
  • Like
Reactions: Biowarejak

PlayfulPhoenix

Founder of SFF.N
SFFLAB
Chimera Industries
Gold Supporter
Feb 22, 2015
1,052
1,990
Not really. We all know that the current Autopilot is not ready for fully autonomous driving; there are enough instances where it lost track of the road for a short while, and stuff like that. We're coming closer and closer to fully autonomous driving, and I fully expect that in 10 years it will be much safer than a human operating a car, even if crashes will still happen now and then.

What caused this crash is certainly fixable.

I agree, but I think many are quick to forget (or not even think about) some of the moral implications that exist when we have to defer to autonomous machines to make life-threatening decisions for us.

For example, say you're in your Tesla alone, going down a stretch of road, and as you round a bend, the car sees a family of four (with two kids) crossing your lane. Your car has only two options at this point: Plow through the family, killing them all but keeping you alive; or swerve directly into the concrete median, sparing the family but likely killing you. Which should the car do? Should it want to save the most people, or save the driver first and foremost? Which would you want to drive? Which would you want everyone else to drive?

What about insurance? What if my autonomous car kills a pedestrian? Should I be liable? My insurance? Tesla? What if an autonomous vehicle and a human driver are in an accident, who's at fault? Can that be determined?

The end state is fantastic, but the transitional phase is going to be very, very ugly. And there are decisions that have to be made about how autonomous vehicles will behave and choose, which are difficult to foresee, let alone decide upon.

I don't think the technological hurdles are as high as these ethical ones. Even if the technology is perfect (which it isn't), that doesn't really matter if people don't trust it.
 

EdZ

Virtual Realist
May 11, 2015
1,578
2,107
I agree, but I think many are quick to forget (or not even think about) some of the moral implications that exist when we have to defer to autonomous machines to make life-threatening decisions for us.

For example, say you're in your Tesla alone, going down a stretch of road, and as you round a bend, the car sees a family of four (with two kids) crossing your lane. Your car has only two options at this point: Plow through the family, killing them all but keeping you alive; or swerve directly into the concrete median, sparing the family but likely killing you. Which should the car do? Should it want to save the most people, or save the driver first and foremost? Which would you want to drive? Which would you want everyone else to drive?
The Trolley Problem is a problem for humans too. One slight difference is that for an autonomous vehicle, the entire situation can be analysed after-the-fact in minute detail and future behaviour modified. For a human accident with driver fatality, the chance of knowing what they did and why is reduced to inferences from forensic examination. Even if Automobile Black Boxes (ABB) were mandated, that would not provide the reasoning behind actions.
What about insurance? What if my autonomous car kills a pedestrian? Should I be liable? My insurance? Tesla? What if an autonomous vehicle and a human driver are in an accident, who's at fault? Can that be determined?
Like with airline accidents, fault, as between a system issue and pilot error, will likely be determined by whether the driver made an input. In a hypothetical fully autonomous (i.e. no driver controls) vehicle, the liability would likely be dealt with by a new type of insurance policy dedicated to driverless vehicles. Having manufacturers liable would only work if an accident was the result of a systemic issue rather than an individual failure, and that would be something for a third-party investigative body to determine after the fact.
In a selectively drivable semi-autonomous vehicle like the current Tesla, liability would likely devolve to the driver in all cases, as the driver is advised to remain prepared to take control of the car even without notice from the hazard warning system. This will change as vehicles become more capable and insurers change their offerings to deal with a shift in liability.
 
  • Like
Reactions: Biowarejak

jeshikat

Jessica. Wayward SFF.n Founder
Silver Supporter
Feb 22, 2015
4,969
4,783
Which should the car do? Should it want to save the most people, or save the driver first and foremost? Which would you want to drive? Which would you want everyone else to drive?

I imagine most people would like to think they're altruistic enough to save the family if they were driving instead of a computer, so I don't see why that logic wouldn't extend to self-driving cars.

What if an autonomous vehicle and a human driver are in an accident, who's at fault? Can that be determined?

That will probably be easier to determine in most cases, actually, since the autonomous car will have a wealth of telemetry to analyze from the moments leading up to the accident. Google has run into the issue, though, where the human driver rear-ends the autonomous car because it's overly cautious and brakes often. Arguably the accident wouldn't have happened with a human driving instead of the computer, but in the end the other driver is at fault for not maintaining a safe distance.
 
  • Like
Reactions: Biowarejak

PlayfulPhoenix

Founder of SFF.N
SFFLAB
Chimera Industries
Gold Supporter
Feb 22, 2015
1,052
1,990
The Trolley Problem is a problem for humans too. One slight difference is that for an autonomous vehicle, the entire situation can be analysed after-the-fact in minute detail and future behaviour modified. For a human accident with driver fatality, the chance of knowing what they did and why is reduced to inferences from forensic examination. Even if Automobile Black Boxes (ABB) were mandated, that would not provide the reasoning behind actions.

It's a famous problem, but figuring out "what happened" isn't what makes it challenging. What makes it challenging is that you have to make the choice.

Right now, society is comfortable with each individual driver making the choice of what to do in that scenario. In a future of autonomous vehicles, we're leaving either automobile companies or the government to make that choice.

You'd no longer have that choice. You'd no longer get to decide whether or not your own car could deliberately choose to kill you, due to something entirely out of your control.

In a hypothetical fully autonomous (i.e. no driver controls) vehicle, the liability would likely be dealt with by a new type of insurance policy dedicated to driverless vehicles. Having manufacturers liable would only work if an accident was the result of a systemic issue rather than an individual failure, and that would be something for a third-party investigative body to determine after the fact.

Who makes whole the remaining relatives of the deceased driver or family in my above example? That's a deliberate choice by an automotive company with no 'right' answer. What if a potential road hazard, unforeseen by an automotive company, causes an autonomous vehicle to careen off a bridge and kill a crowd? Is that a claim against insurance or faulty software?

There are countless examples like this. There will be countless accidents like this. Perfect information does not resolve the ambiguity of an imperfect world.

I imagine most people would like to think they're altruistic enough to save the family if they were driving instead of a computer, so I don't see why that logic wouldn't extend to self-driving cars.

I guarantee you that there will be millions of people who will simply refuse to be in a car that would choose to kill the passengers in any context. "Why should my own car kill me if a family is crossing a road illegally? Why do I care about saving lives across society, if it means killing more passengers, which is what I am? What if the car is wrong, and it could have avoided hurting anyone? Do I really trust some developers I don't know, and technology that can be hacked, with the lives of my children? Do I trust AI?"

And that means millions of people who will resist or outright fight a transition to all-autonomous vehicles. Enough to kill or seriously set back that change. This is a very real problem.

Google has run into the issue, though, where the human driver rear-ends the autonomous car because it's overly cautious and brakes often. Arguably the accident wouldn't have happened with a human driving instead of the computer, but in the end the other driver is at fault for not maintaining a safe distance.

This friction, this incongruity, this issue of co-habitation is what I'm driving at (no pun intended). Again, the biggest challenge for autonomous driving isn't the technology, it's the cultural and psychological realities of people. People driving alongside autonomous vehicles is going to be ugly. Vehicles making ethical decisions for us is going to be ugly. We cannot assume that society will see the objective benefits of technology and accept them without fear. And there are profound ethical questions and realities that we have to tackle, and we just haven't.

Society has proven itself great at technological progress, but terrible at ethics. If we don't proactively enable our ethical schools of thought to catch up to our engineering acumen, we're going to scare and hurt and kill a lot of people. But nobody's talking about it - not politicians, not governments, not the people buying the cars, and certainly not the people making them.

I'm excited for the future, and for technologies like these, I really am. I consider myself an optimist about the future. But stuff like this, frankly, should terrify us. There's as much potential downside to this stuff as there is upside. The status quo of how we handle that intersection of technology and society and ethics is not enough.
 

Kmpkt

Innovation through Miniaturization
KMPKT
Feb 1, 2016
3,382
5,936
Pretty sure even with the one fatality and two crashes with Autopilot, the crash rate per kilometre driven is far below the human-driven numbers.
 
  • Like
Reactions: Phuncz

jeshikat

Jessica. Wayward SFF.n Founder
Silver Supporter
Feb 22, 2015
4,969
4,783
Do I really trust some developers I don't know, and technology that can be hacked, with the lives of my children? Do I trust AI?"

But people have to make that same choice every time they get in a car as a passenger with a human driver they don't know. Who's to say your Uber driver isn't a die-hard PETA member who would choose to swerve the car off a cliff to avoid hitting a dog?

This friction, this incongruity, this issue of co-habitation is what I'm driving at (no pun intended). Again, the biggest challenge for autonomous driving isn't the technology, it's the cultural and psychological realities of people. People driving alongside autonomous vehicles is going to be ugly.

Humans driving among humans is already ugly though; 90% of car accidents are due to human error. Tesla claims that Autopilot is already safer per mile driven than human drivers, and it's only going to get better as the technology improves and more data is collected to train the system.
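For anyone who wants to sanity-check that per-mile claim, here's a minimal back-of-the-envelope sketch in Python. The inputs (roughly 130 million Autopilot miles at the time of the fatality, and a US average of about one fatality per 94 million vehicle miles) are the figures Tesla cited in mid-2016, taken here as assumptions rather than verified statistics:

```python
# Back-of-the-envelope fatality-rate comparison using the figures Tesla
# cited in mid-2016. Illustrative assumptions only, not a rigorous dataset.

autopilot_miles = 130e6        # claimed miles driven with Autopilot engaged
autopilot_fatalities = 1       # the single fatality discussed in this thread
us_miles_per_fatality = 94e6   # claimed US average: one fatality per ~94M miles

autopilot_rate = autopilot_fatalities / autopilot_miles   # fatalities per mile
human_rate = 1 / us_miles_per_fatality                    # fatalities per mile

print(f"Autopilot:  {autopilot_rate * 1e8:.2f} fatalities per 100M miles")
print(f"US average: {human_rate * 1e8:.2f} fatalities per 100M miles")
print(f"Human rate / Autopilot rate: {human_rate / autopilot_rate:.2f}x")

# Caveat: one event is a tiny sample, and Autopilot miles are mostly highway
# miles, so this is nowhere near an apples-to-apples comparison.
```

Even taken at face value the gap is modest and rests on a single event, so the claim is plausible rather than proven.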

Society has proven itself great at technological progress, but terrible at ethics. If we don't proactively enable our ethical schools of thought to catch up to our engineering acumen, we're going to scare and hurt and kill a lot of people.

Autonomous cars will no doubt make ethically questionable choices as a result of their algorithms, choices that will result in human deaths. But even if they're not perfect, once they're undeniably proven to be safer than letting humans drive, I'd say it's unethical to keep letting fallible humans drive themselves.
 

EdZ

Virtual Realist
May 11, 2015
1,578
2,107
Society has proven itself great at technological progress, but terrible at ethics. If we don't proactively enable our ethical schools of thought to catch up to our engineering acumen, we're going to scare and hurt and kill a lot of people. But nobody's talking about it - not politicians, not governments, not the people buying the cars, and certainly not the people making them.
You might be surprised. Even a decade ago, part of my Systems & Control Engineering course was devoted to legal and ethical considerations of systems. And in the wider field of general AI research, the ethical issues, and even future potential ethical issues, have been gone over many times and will continue to be by the researchers developing those systems.
Public debate is often far behind actual policy, though. Take 'killer drones' for example: the worry is about an autonomous vehicle making decisions - without a human in the loop - on target acquisition and the final kill decision. Many are worried we might soon have robots making this choice.
These people are wrong. We've had robots making this choice for decades, and they have already been used in anger. Modern long-range cruise missiles and air-to-air missiles, as part of their terminal guidance packages, perform completely autonomous target identification, discrimination, and go/abort decisionmaking. These are not merely programmed to "go kill this human-designated target" (though they more often than not ARE programmed to do that, due to rules of engagement); they are flexible enough to be told "go in this direction, and attack targets of opportunity", with the missile(s) identifying potential targets, picking targets, and making the decision to attack or not.

In the case of long-range anti-shipping missiles, these robots can even network to distribute functionality and decisionmaking heterogeneously (e.g. one missile in a swarm 'pops up' from sea-skimming mode to get a better view, at the expense of its own survivability). There is also the torpedo mine, in use since the late 70s, another device that makes completely autonomous decisions about when to fire and what to fire at. And the venerable Tomahawk, known mainly for its capability to target a GPS coordinate or fly along a pre-set terrain map, can also be given an area and the characteristics of a target (e.g. look for an object of these dimensions and temperature that corresponds to a tank), with the missile making all the decisions as to what constitutes a valid target or whether to dive into the landscape inert.
The primary difference between these and proposed 'killer drones' is that you might get your drone back again.
 

Phuncz

Lord of the Boards
SFFn Staff
May 9, 2015
5,937
4,951
Very interesting points are being made, very informative and also mind-boggling sometimes. We should definitely have Elon Musk as a guest on the podcast.
 

jØrd

S̳C̳S̳I̳ ̳f̳o̳r̳ ̳l̳i̳f̳e̳
sudocide.dev
SFFn Staff
Gold Supporter
LOSIAS
Jul 19, 2015
818
1,359
Very interesting points are being made, very informative and also mind-boggling sometimes. We should definitely have Elon Musk as a guest on the podcast.
We could totally tie it in; the Model 3 is basically an SFF data center.
 

onlyabloke

Cable-Tie Ninja
Jul 22, 2016
178
193
pcpartpicker.com
It's possible, but unlikely. Like one of the first repliers said, I don't want to deal with first-gen issues for a car like this. Plus, with a range of 215 miles, that kind of limits your travel capabilities unless you know where more charging stations are. My Kia goes nearly 500 miles on a tank of gas and costs a few grand less.

I don't know. We'll see when the time comes.
 

iFreilicht

FlexATX Authority
Feb 28, 2015
3,243
2,361
freilite.com
You'd no longer have that choice. You'd no longer get to decide whether or not your own car could deliberately choose to kill you, due to something entirely out of your control.

I'd argue that you don't even have that choice now in the scenario you describe, at least if "choice" is something to attribute to free will. If you are faced with that situation, you will act instinctively, and what that action is will depend on your training and previous experiences with similar situations.

What seems to be dismissed often is that autonomous cars should prevent these situations from occurring in the first place. Speed limits exist for a reason and are legally binding, and we have more guidelines, like minimum safe distances, that we learn about before getting our driver's license. I'd argue that most accidents are caused by humans not adhering to those rules. A car turning around a corner which notices a family shouldn't have to decide between killing you or (part of) the family, because it should be driving slowly enough to be able to brake safely anyway. The trolley problem isn't solvable, so it should be circumvented.
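To put a rough number on that last point, here's a minimal sketch of the "never outdrive your sight distance" rule, assuming a 0.5 s reaction time and 7 m/s² of braking deceleration (both made-up, typical-order values, not figures from any real driving stack). Basic kinematics gives the highest speed from which the car can still stop within the road it can actually see:

```python
import math

def max_safe_speed(sight_distance_m, reaction_time_s=0.5, decel_mps2=7.0):
    """Highest speed (m/s) from which the car can stop within its sight distance.

    Solves  v * t_reaction + v**2 / (2 * decel)  <=  sight_distance  for v.
    Reaction time and deceleration are assumed, illustrative values only.
    """
    a = 1.0 / (2.0 * decel_mps2)   # quadratic coefficient on v**2
    b = reaction_time_s            # linear coefficient on v
    c = -sight_distance_m
    return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

# Example: a blind corner where only 30 m of road is visible ahead.
v = max_safe_speed(30.0)
print(f"max safe speed: {v:.1f} m/s ({v * 3.6:.0f} km/h)")
```

So on a blind corner with only 30 m of visible road, keeping to roughly 60 km/h already removes the dilemma, which is exactly the argument above.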
 
  • Like
Reactions: Biowarejak and EdZ

PlayfulPhoenix

Founder of SFF.N
SFFLAB
Chimera Industries
Gold Supporter
Feb 22, 2015
1,052
1,990
Pretty sure even with the one fatality and two crashes with Autopilot, the crash rate per kilometre driven is far below the human-driven numbers.

But people have to make that same choice every time they get in a car as a passenger with a human driver they don't know. Who's to say your Uber driver isn't a die-hard PETA member who would choose to swerve the car off a cliff to avoid hitting a dog?

I think I'm poorly conveying my point - sometimes, a hundred words work when a thousand won't do.

It doesn't matter if computers/AI are 'better' by any objective measure than humans at something. Many people are deeply uncomfortable with the moral ramifications of having non-humans make ethical decisions for us. And they only need one counter-example to spark their fears. They already have several.

The answer to 'Why should I trust an autonomous vehicle with the lives of my family?' shouldn't be 'cars deciding for us will, in the long run, be better for society'. You're not answering their question and addressing their fears; if anything, you're only cementing them.

When you don't consider the human psychology and societal norms about stuff like this, you get anti-LGBTQ bigots, anti-vaccination parents, and anti-GMO activists. No amount of evidence in the world will change the minds of these groups. And yet they delayed the rights of LGBTQ individuals, caused outbreaks that killed children, and denied the farmers of starving communities access to critically-needed seeds. They hurt people, because we sat on high horses and didn't know how to empathize with their very real fears.

We have to do better. Much, much better.
 
  • Like
Reactions: Biowarejak

Phuncz

Lord of the Boards
SFFn Staff
May 9, 2015
5,937
4,951
I have the same fear as you: I'm afraid people will think about themselves, ignorantly, and not about the progress of society and humanity. Then again, there was undoubtedly the same fear-mongering when the industrial revolution began. If everyone's mind were more set on the community instead of the individual, we'd be much better off.
 

K888D

SFF Guru
Lazer3D
Feb 23, 2016
1,483
2,970
www.lazer3d.com
I have the same fear as you: I'm afraid people will think about themselves, ignorantly, and not about the progress of society and humanity. Then again, there was undoubtedly the same fear-mongering when the industrial revolution began. If everyone's mind were more set on the community instead of the individual, we'd be much better off.
The UK is an example of this with the Brexit decision.
 
  • Like
Reactions: Biowarejak