How Safe Are Self-Driving Cars After the First Fatal Accident?

  • Thread starter Dr. Courtney
In summary, a self-driving Uber car struck and killed a pedestrian in Arizona. Given the small experimental installed base of self-driving cars, the incident raises concerns about the technology, and the tragedy will be scrutinized like no previous autonomous-vehicle incident.
  • #176
mfb said:
You can mess with human drivers in many ways as well. Many of these ways are illegal, some ways specific to autonomous cars might become illegal, but overall this is not a big issue today, and I don't see how it would become one in the future.

There is an inherent danger in messing with human drivers because they can also act irrationally.
I hope so, but in a culture where automation has eliminated driving and most low-skilled jobs, young people will have to find something to do to kill time other than the 'Tide Pod' challenge.
 
  • #177
Passengers in driverless vehicles can act irrationally as well. But I don't think that is what stops people from deliberately interfering with traffic.
 
  • #178
berkeman said:
That's kind of off-topic for this thread. This thread is about the failure of the self-driving car to see the pedestrian that it hit at night. The links you posted are for a completely different accident (that happened about 10 miles from me) where the driver hit a center divider. You can start a separate thread about that accident if you like -- it's not clear that the car was in self-driving mode, IIRC.
Autopilot was active (https://www.theguardian.com/technology/2018/mar/31/tesla-car-crash-autopilot-mountain-view), as Tesla confirmed; that is why I posted it here.
 
  • #179
mfb said:
You can mess with human drivers in many ways as well. Many of these ways are illegal, some ways specific to autonomous cars might become illegal, but overall this is not a big issue today, and I don't see how it would become one in the future.
Perhaps because they are computers, people will want to hack them?
 
  • #180
And there are people who want to hack websites. That doesn't stop websites from existing and taking over more and more functions: buying and booking things online, online banking, ...
The danger of typing Robert'); DROP table students;-- didn't exist when everything was done on paper - but that still didn't stop the conversion to computer databases. People found exploits like SQL injection, and other people wrote patches to fix them.
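For a concrete (if minimal) illustration of that exploit-and-patch cycle, here is a Python/sqlite3 sketch; the table and the malicious "name" are just the joke above, not any real system's code:

Code:
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT)")

name = "Robert'); DROP TABLE students;--"

# The exploit: user input pasted straight into the SQL text. With this
# input the statement turns into two statements, the second of which
# drops the table (on engines that allow stacked statements).
unsafe_sql = "INSERT INTO students (name) VALUES ('%s')" % name
print("What the database would be asked to run:", unsafe_sql)

# The patch: a parameterized query treats the input purely as data,
# so little Bobby Tables is stored as a name and nothing more.
conn.execute("INSERT INTO students (name) VALUES (?)", (name,))
print(conn.execute("SELECT name FROM students").fetchall())

The same pattern applies here: an exploit gets found, and the fix is to stop trusting the raw input and handle it more carefully.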

If STOP sign cycling jerseys become a trend and the cars struggle with them, someone will write a patch to better distinguish between cycling jerseys and actual stop signs.
 
  • #181
In a car with lane assist, what happens at a symmetrical fork in the road? Does it see this as the road getting wider and send you down the middle?
 
  • Like
Likes russ_watters
  • #183
CWatters said:
In a car with lane assist, what happens at a symmetrical fork in the road? Does it see this as the road getting wider and send you down the middle?

 
  • #184
From Fox: "... when in self-driving mode, car relies on human driver to intervene in case of emergency, but doesn't alert the driver ..."
 
  • #185
Bystander said:
From Fox: "... when in self-driving mode, car relies on human driver to intervene in case of emergency, but doesn't alert the driver ..."
Yeah, on the surface of it, that would seem to be a design flaw...
 
  • Like
Likes russ_watters
  • #186
If it could alert the driver, it would already know something was wrong and could react accordingly on its own. The human is necessary for emergencies the car cannot (yet) detect.
 
  • Like
Likes russ_watters
  • #187
NTSB: Uber’s sensors worked; its software utterly failed in fatal crash
https://arstechnica.com/cars/2018/0...led-by-ubers-self-driving-software-ntsb-says/

"
The National Transportation Safety Board has released its preliminary report on the fatal March crash of an Uber self-driving car in Tempe, Arizona. It paints a damning picture of Uber's self-driving technology.

The report confirms that the sensors on the vehicle worked as expected, spotting pedestrian Elaine Herzberg about six seconds prior to impact, which should have given it enough time to stop given the car's 43 mph speed.

The problem was that Uber's software became confused, according to the NTSB. "As the vehicle and pedestrian paths converged, the self-driving system software classified the pedestrian as an unknown object, as a vehicle, and then as a bicycle with varying expectations of future travel path," the report says.

Things got worse from there. ..."
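As a rough sanity check on the "enough time to stop" claim (assuming a dry-road braking deceleration of about ##0.7g \approx 6.9## m/s²): ##43\text{ mph} \approx 19.2\text{ m/s}##, so six seconds of warning corresponds to roughly ##19.2 \times 6 \approx 115## m of travel, while the braking distance needed is only about ##v^2/(2a) = 19.2^2/(2 \times 6.9) \approx 27## m, reached in under ##v/a \approx 2.8## s. Even allowing a second or two for processing and actuation, the margin was large.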
 
  • Like
Likes nsaspook
  • #188
Why is it that the emergency braking capability that is enabled for human drivers is disabled under computer control because it might cause erratic driving behavior? How so? One would think that such a capability would be a sort of fail-safe feature for a computer system that might have an obstacle-identification issue. Collision imminent, stop; no other action required.
 
  • Like
Likes russ_watters
  • #189
gleem said:
enabled for human drivers is disabled for computer control because it might cause erratic driving behavior.
Time for criminal negligence proceedings?
 
  • Like
Likes russ_watters
  • #190
Nsaspook... I think that video answers my question.

Looks like it just followed the brightest white line.
 
  • #191
gleem said:
Why is it that the emergency braking capability that is enabled for human drivers is disabled under computer control because it might cause erratic driving behavior? How so?
I wondered about that too. I was going to post a wise___ comment about erratic braking interrupting me when I was doing my fingernail polish in self-driving mode, but then I found a link that said it was meant to avoid false emergency braking for things like plastic bags blowing across the road. (I didn't save the link) Still, at least enabling the audible beep alert when the car detects something unusual seems so important...
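To put that trade-off in concrete terms, here is a toy sketch (emphatically not Uber's actual logic; the objects and confidence numbers are invented): a single braking-confidence threshold raised high enough to ignore plastic bags also delays or suppresses braking for real hazards.

Code:
# Hypothetical detections with made-up collision-confidence scores.
detections = [
    {"object": "plastic bag", "collision_confidence": 0.30},
    {"object": "pedestrian with bicycle", "collision_confidence": 0.55},
]

def should_brake(detection, threshold):
    """Brake only when the predicted collision confidence clears the threshold."""
    return detection["collision_confidence"] >= threshold

for threshold in (0.2, 0.6):
    decisions = {d["object"]: should_brake(d, threshold) for d in detections}
    print(f"threshold={threshold}: {decisions}")
# A low threshold brakes for the bag too (an erratic ride); a high threshold
# ignores the bag but also fails to brake for the pedestrian.

Either way the threshold is set, something undesirable happens, which is exactly the kind of case where an alert to the human would help.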
 
  • #192
Yes, but it is normally enabled with a human in control. How does a human override it if it is a bag, and what if it is not a bag but the car thinks it is? Considering that it detected the woman six seconds before impact, an audible warning seems obvious: "Heads up, something you should take a look at."
 
  • #193
gleem said:
Considering that it detected the woman six seconds before impact, an audible warning seems obvious: "Heads up, something you should take a look at."
Spinnor said:
The problem was that Uber's software became confused, according to the NTSB. "As the vehicle and pedestrian paths converged, the self-driving system software classified the pedestrian as an unknown object, as a vehicle, and then as a bicycle with varying expectations of future travel path," the report says.

Things got worse from there. ..."
It appears that HAL was conflicted... Open the Pod Bay doors, HAL. HAL?!
 
  • #194
berkeman said:
It appears that HAL was conflicted...
Not true ...

 
Last edited:
  • Like
Likes berkeman and Borg
  • #195
gleem said:
Considering that it detected the woman six seconds before impact, an audible warning seems obvious: "Heads up, something you should take a look at."

Even if you aren't busy on your phone, it can take a long time to react to a warning and regain control...

https://jalopnik.com/what-you-can-do-in-the-26-seconds-it-takes-to-regain-co-1791823621

The study, conducted by the University of Southampton, reviewed the driving patterns of 26 men and women in situations with and without a “distracting non-driving secondary task.”

Here’s the upshot: the study found that drivers in “non-critical conditions”—that is, say, when you’re not texting, fiddling with the radio, eating, or, as the study puts it, “normal conditions”—needed 1.9 to 25.7 seconds to assume control of the autonomous vehicle when prompted. Nearly 26 ******* seconds, and again, that is without the distractions. When participants were distracted, the response time to assume control ranged from 3.17 to 20.99 seconds.

https://www.digitaltrends.com/cars/automated-car-takeover-time/

It’s important to note that the test times didn’t measure driver response to whatever was going on, but just how long it took them to regain control. Then they had to assess the situation and take appropriate action. If you don’t find that a little scary, maybe think again.
 
Last edited:
  • Like
Likes nsaspook
  • #196
Spinnor said:
NTSB: Uber’s sensors worked; its software utterly failed in fatal crash
https://arstechnica.com/cars/2018/0...led-by-ubers-self-driving-software-ntsb-says/

The problem was that Uber's software became confused, according to the NTSB. "As the vehicle and pedestrian paths converged, the self-driving system software classified the pedestrian as an unknown object, as a vehicle, and then as a bicycle with varying expectations of future travel path," the report says.
What a terrible analysis! There is nothing in the (brief, preliminary) NTSB report that suggests the computer was confused or otherwise failed. A succession of increasingly accurate classifications is exactly what you would expect from a properly operating system. What isn't in the report is how long the sequence took, or what the predicted paths and courses of action to avoid the collision were. What also isn't clear is whether *all* avoidance actions were disabled or only "emergency" actions (e.g., are all braking maneuvers to avoid a collision "emergency" actions?).

What I see is a system that - from what we have been told - was working properly but was purposely handicapped so the engineers could focus on other aspects of the system. As if we humans are AI in a video game that can be turned off so the developers can focus on testing the scenery without getting shot at.
Bystander said:
Time for criminal negligence proceedings?
Yep. Time to subpoena the company org chart and start circling names of people to arrest.

Unfortunately, the driver will need to be first up because according to the wording in the report, she was responsible for avoiding the collision. She could simply be charged under existing distracted driving rules. I say "unfortunately" because it was her manager who assigned her a job description that was illegal at face value. But "just following orders" doesn't alleviate her responsibility. Still, everyone involved in the decisions to not have a dedicated driver (too expensive to staff the car with two people?) and to disable the emergency braking features of the software should be charged.

These companies need to be made to stop using the real world as an R&D environment. Starting to charge engineers and managers with homicide will do that. This incident appears to me to be covered under existing distracted driver laws, but the federal government should step in and set more explicit rules for self-driving cars and their testing.
 
  • Like
Likes NTL2009
  • #197
CWatters said:
Even if you aren't busy on your phone it can take a long time to react to a warning and regain control...
This, to me, is an "uncanny valley" that needs to be addressed better -- again, by law if necessary. It seems clear to me that a responsibility-handoff situation is fundamentally incapable of working and should be avoided/not allowed. Drivers just aren't capable of turning their attention on and off in the way these systems need, and even if they were, it would come with an increased reaction time. It also seems to me that the scenario only exists because self-driving features aren't good enough yet.
 
  • Like
Likes nsaspook
  • #198
+1. Pilots have been concerned for some time about the loss of situational awareness that automation can cause. Fortunately, they usually have longer to react.
 
  • #199
russ_watters said:
but the federal government should step in and set more explicit rules for self-driving cars and their testing.

I completely agree with this, by the way. Either the federal government or the individual states that permit these vehicles on the road need to set clear guidelines, badly.
- - - - -

russ_watters said:
Yep. Time to subpoena the company org chart and start circling names of people to arrest.

Unfortunately, the driver will need to be first up because according to the wording in the report, she was responsible for avoiding the collision. She could simply be charged under existing distracted driving rules. I say "unfortunately" because it was her manager who assigned her a job description that was illegal at face value. But "just following orders" doesn't alleviate her responsibility. Still, everyone involved in the decisions to not have a dedicated driver (too expensive to staff the car with two people?) and to disable the emergency braking features of the software should be charged.

These companies need to be made to stop using the real world as an R&D environment. Starting to charge engineers and managers with homicide will do that.

To be honest, this is the kind of playbook I'd expect from a politically ambitious prosecutor (Bharara, Spitzer, Giuliani, probably Christie). When there is a lack of clear guidelines, aggressive prosecutors looking to promote themselves frequently fill the void. I'm kind of surprised it hasn't happened yet.

It also strikes me as very bad policy. There is a massive body count every year due to 'regular' car accidents in the US. (This towers over murders, by the way.) It's actually embarrassing how bad car fatalities are every year in the US versus, say, Ireland or whatever Western European country. Some reading here: https://www.economist.com/united-states/2015/07/04/road-kill note: "In 2014 some 32,675 people were killed in traffic accidents. In 2013, the latest year for which detailed data are available, some 2.3m were injured—or one in 100 licensed drivers."
- - - -
Self-driving cars, from what I can tell, are by far the most promising way out of this. Charging employees and/or companies with felonies in a situation where I don't see malevolence -- or a well-defined standard of criminal negligence -- seems myopic. We're talking about a single-digit body count, possibly double digits over the next couple of years, from people (i.e., a handful of self-driving-car companies and their personnel) acting in good faith to improve self-driving cars (though I'm not happy with the way it's being done -- again, rules needed) versus tens of thousands of fatalities every year and millions of injuries, and that's just in the US.
- - - -
My take: get clear rules first, and proceed from there.
 
  • #200
russ_watters said:
Unfortunately, the driver will need to be first up because according to the wording in the report, she was responsible for avoiding the collision. She could simply be charged under existing distracted driving rules
That's just one opinion from a single report, and it could be quashed as incorrect.
People hit deer and other wild animals darting out into traffic and are not labeled 'distracted'.
The comparison is made, with apologies to the lady (victim) and her family, to present the argument: if it had NOT been a self-driving car, would any reasonable person have been able to avoid the incident? Pedestrians walking in the middle of traffic assume a fair amount of responsibility for their own decisions and actions putting their lives in danger, and a reasonable driver may not be able to avoid a collision with such a pedestrian.

As I said previously, the "safety" driver is superfluous for the most part and is just there in case, say, the car stalls or gets a flat tire. Monitoring the gauges - have they never heard of data recording and black boxes for cars? Checking for possible incidents - the computer is supposed to be able to see, monitor, and take appropriate action better than a human, so we have been told, which is quite true if it all works correctly. On the 1% chance that the human is better, that can be fixed in the AI.

The thing is, a lot of money is expected to be made with self-driving vehicles in the coming years. Right now, my view is that the first one off the starting block wins, to some extent, and a sort of 'wild west' attitude (again, my view), plus promotional 'this is the next best thing' and 'join the bandwagon or be left behind' messaging, puts pressure on testing to go live perhaps before it should. Some companies will be fine with more bugs than others, rightly or wrongly. And since transportation departments, cities, and towns are all rather unsure how the AI car of the future will affect commuting and sharing the road, the live testing serves that planning purpose as well.

Since there is no oversight on what a safe AI car really is, and the agencies have allowed the live testing, I really do not comprehend how they can turn around and say criminal intent.
 
  • #201
I don't subscribe to the argument "There are billions at stake here, and if people die, well, you've got to break some eggs if you want an omelet." That argument can be used for killing a liquor store clerk for a few hundred bucks in the till - differing only in degree. ("Dude was in my way, man.")

I asked four questions in #75:
  • Was the probability of a fatality calculated? If not, why not?
  • What is the acceptable fatality rate?
  • Was there any management pressure for Risk to calculate an arbitrarily low accident/fatality rate?
  • As accidents have occurred, such as the one a year ago in Tempe, how is the model's predictive power compared with data? Is the model adjusted in light of new information? If so, by whom? Who decides whether the new risks are acceptable?
I would argue that before it is ethical to alpha test this software on the street, with innocent and unaware citizenry taking on all the risk, there should be some pretty good answers to these questions.
 
  • Like
Likes russ_watters
  • #202
Vanadium 50 said:
There are billions at stake here
It's a world economy, and the innovative spring ahead, or keep ahead.
Hi-tech in various forms is sought after.

China has a top-down approach which, according to this report, could beat the world in adoption.
http://fortune.com/2016/04/23/china-self-driving-cars/

The Western attitude is that industry will sort itself out and adopt standards.
The billions are investment dollars, markets, jobs, and standard of living.
The risk is also taken on by industry and by the finance industry.

Now if China beats the world, their product may just become more desirable than anything from the North American or European auto industry.
I have no knowledge of what they consider acceptable risk for a fatality involving a self driving car.

I would bet that more people have been killed falling off horses, climbing Mt. Everest, or swimming with sharks than by self-driving cars in the last while.
While those may seem like self-chosen undertakings, companies organize such adventures with an associated risk, one the public is generally only vaguely aware of and considers acceptable.

Vanadium 50 said:
I asked four questions in #75:
And those are questions the victim's lawsuit will be demanding answers to while negotiating for $500M from various entities, which may give North America the kick in the pants it needs to get its act together. In addition, if the insurance industry has to pay out big bucks and refuses to insure self-driving cars without assurances, that's another kick for the industry and government.
 
  • #203
StoneTemplePython said:
I completely agree with this by the way. Either Fed level, or individual states that permit these vehicles to run need clear guidelines, badly.
I said federal in this case because the states aren't stepping up and that would be more expedient. But it is technically a state responsibility.
StoneTemplePython said:
It also strikes me as very bad policy. There is a massive body count every year due to 'regular' car accidents in the US... Some reading here: https://www.economist.com/united-states/2015/07/04/road-kill note: "In 2014 some 32,675 people were killed in traffic accidents. In 2013, the latest year for which detailed data are available, some 2.3m were injured—or one in 100 licensed drivers."
- - - -
Self-driving cars, from what I can tell, are by far the most promising way out of this. Charging employees and/or companies with felonies in a situation where I don't see malevolence -- or a well-defined standard of criminal negligence -- seems myopic. We're talking about a single-digit body count, possibly double digits over the next couple of years, from people (i.e., a handful of self-driving-car companies and their personnel) acting in good faith to improve self-driving cars (though I'm not happy with the way it's being done -- again, rules needed) versus tens of thousands of fatalities every year and millions of injuries, and that's just in the US.
"Malevolence" is intentionally causing harm. We're discussing *negligence* here. Do you not see negligence in instructing a driver to not look at the road?
256bits said:
That's just one opinion from a single report, and it could be quashed as incorrect.
The report is from the NTSB. Its purpose is, primarily, to find facts. The NTSB has a well-regarded reputation for uncovering the facts of what happens in accidents and is unlikely to be wrong on the critical and well-established facts, especially considering just how well documented the facts of this case are. But either way, it's the only report we have, so let's not idly speculate about alternate facts.
256bits said:
People hit deer and other wild animals darting out into traffic and are not labeled 'distracted'.
It doesn't appear to me that you read the report. "Distracted" is an action, not an outcome. A driver is not labeled "distracted" because they hit something; they are labeled "distracted" because they aren't looking at the road. In this incident, the accident occurred primarily because the driver wasn't watching the road.
256bits said:
...a reasonable driver may not be able to avoid a collision with such a pedestrian.
From what is known about this incident, that does not appear to be true.
256bits said:
As I said previously, the "safety" driver is superfluous for the most part...
Please read the NTSB report! What you are saying is simply not true. In this incident, the driver was specifically responsible for emergency braking. It was a defined function of the test being performed that emergency braking by the AI was disabled and the driver was responsible.
256bits said:
Since there is no oversight on what a safe AI car really is, and the agencies have allowed the live testing, I really do not comprehend how they can turn around and say criminal intent.
Please read the NTSB report. It should make the criminal negligence clearer. The law does not require a specific definition of every form of negligence (and indeed Google tells me that Arizona is one of two states without a distracted driving law), because that would require infinite prescience. Humans will always be able to think of creative new ways to kill people through negligence.
 
  • Like
Likes Bystander, nsaspook and berkeman
  • #204
Vanadium 50 said:
I don't subscribe to the argument "There are billions at stake here, and if people die, well, you got to break some eggs if you want an omelet."...
I would argue that before it is ethical to alpha test this software on the street, with innocent and unaware citizenry taking on all the risk, there should be some pretty good answer to these questions.
I really hope no one was after that line of argument (for money or net savings of lives), but I'll put a finer point on it: western law/morality does not allow use of the utilitarian principle on matters of life safety. It is not permissible to knowingly kill or risk killing one group of people in order to save another. Not even if you only knowingly kill one person to save a thousand or a million.

It would be really surprising to me if anyone was going in that direction, but still, the lack of outrage over this is a bit surprising to me given the anti-corporate atmosphere in the US these days. It seems tech companies are immune to the outrage the financial and pharma industries get when they are willfully negligent.
 
  • #205
russ_watters said:
I said federal in this case because the states aren't stepping up and that would be more expedient. But it is technically a state responsibility.

"Malevolence" is intentionally causing harm. We're discussing *negligence* here. Do you not see negligence in instructing a driver to not look at the road?

I've spent far too much time with lawyers, I'm afraid.

There are multiple grades of negligence. Only the most severe form is criminal. (And perhaps the most severe form of that is criminally negligent homicide.) There are a lot of lesser grades that may have civil but not criminal implications.

You mentioned "homicide" charges while I mentioned that I didn't think it met a "well defined standard of criminal negligence". Whether or not it is "negligent" to instruct a 'driver' of a quasi-autonomous vehicle to not look at the road is one question.

A much stricter question is whether it is criminally negligent to instruct a 'driver' of a quasi-autonomous vehicle to not look at the road. I don't see any case law or precedent suggesting that this higher bar has been met. And as I said, the existing statutes don't directly address (or even foresee) such a thing.

And as I've said it's bad policy too.
 
  • #206
StoneTemplePython said:
A much stricter question is whether it is criminally negligent to instruct a 'driver' of a quasi-autonomous vehicle to not look at the road.
Sorry, I haven't been keeping up in this thread. I thought Tesla used CYA and said that the driver was supposed to be paying attention.
 
  • #207
StoneTemplePython said:
There are multiple grades of negligence. Only the most severe form is criminal. (And perhaps the most severe form of that is criminally negligent homicide.) There are a lot of lesser grades that may have civil but not criminal implications.

You mentioned "homicide" charges while I mentioned that I didn't think it met a "well defined standard of criminal negligence". Whether or not it is "negligent" to instruct a 'driver' of a quasi-autonomous vehicle to not look at the road is one question.

A much stricter question is whether it is criminally negligent to instruct a 'driver' of a quasi-autonomous vehicle to not look at the road. I don't see any case law or precedent suggesting that this higher bar has been met. And as I said, the existing statutes don't directly address (or even foresee) such a thing.
You won't see case law examples because self-driving cars are a new thing. But please make sure that you saw the part in the report about how the emergency braking function of the car was disabled and the driver was therefore explicitly responsible for emergency braking. To me this makes the quasi-autonomous nature of the car irrelevant and this case essentially identical to a texting-and-driving case. There is room to quibble about manslaughter vs homicide, but that isn't a hair I care to split. Either way, I see reason to put many people in jail over this, as is common in texting-and-driving cases. (see: https://www.al.com/news/mobile/index.ssf/2016/02/texting_and_driving_manslaught_3.html )
StoneTemplePython said:
And as I've said it's bad policy too.
Why? Because expediently getting this tech to market will save lives overall even if the negligence kills a few people? I really hope you aren't suggesting that: please see my post #204.
 
  • #208
berkeman said:
Sorry, I haven't been keeping up in this thread. I thought Tesla used CYA and said that the driver was supposed to be paying attention.

you're breaking my preferred level of abstraction here :wink:

I think the comment was about Uber not Tesla. In any case there is a big difference between 'mere negligence' and the criminal variety which is what I was trying to highlight.
 
  • #209
berkeman said:
Sorry, I haven't been keeping up in this thread. I thought Tesla used CYA and said that the driver was supposed to be paying attention.
Wrong case. You are correct regarding the Tesla that drove under a truck and decapitated a man, but we're discussing the Uber that struck and killed a pedestrian. In the test being conducted, the car was semi-autonomous, but the driver was responsible for emergency braking, as that system was disabled in the software.
 
  • #210
russ_watters said:
I really hope no one was after that line of argument (for money or net savings of lives), but I'll put a finer point on it: western law/morality does not allow use of the utilitarian principle on matters of life safety. It is not permissible to knowingly kill or risk killing one group of people in order to save another.

This really is too simplistic. You are correct in the deterministic case.

The statement is either wrong or much too simplistic in the probabilistic sense of risking lives (which I underlined for emphasis).

If you look at the design and analysis of speed limits, for instance, you'll see that policy makers know that more people will die with higher speed limits. (The wonkish sort even have distributions for increased deaths, and the probability of no increase in deaths over a one-year period when raising the speed limit from, say, 30 to 60 mph in a big metropolitan area is basically ##\lt \epsilon## for any ##\epsilon \gt 0## that I can think of.)

It's worth mentioning that fatalities shoot up dramatically with speed limits over ##\approx 30## mph. The fatalities I'm referring to concern not just drivers and passengers in cars, but bicyclists and pedestrians being hit by them. However there are trade offs in terms of huge costs to the economy, etc. from restricting speed limits to under 30 mph. So we don't do that. This is a pragmatic, or if you prefer utilitarian, guided policy.

There are related ideas behind tort and liability laws. This sort of thing is addressed, perhaps too literally, in a Law and Economics course.

So, yes, your statement holds deterministically. But when you sprinkle randomness in, it does not hold universally. Some nasty tradeoffs are real, and policy makers know this. I can probably dredge up more material from a law and econ course I took a while back if needed. Maybe a separate thread. This is very subtle stuff.
 
