How Safe Are Self-Driving Cars After the First Fatal Accident?

  • Thread starter: Dr. Courtney
  • Tags: Car Self
AI Thread Summary
A self-driving Uber vehicle struck and killed a pedestrian in Arizona, marking the first fatal incident involving autonomous cars. The event raises significant concerns about the safety and readiness of self-driving technology, especially given the limited number of vehicles in operation. Discussions highlight the potential for engineers to analyze the incident thoroughly, which could lead to improvements across all autonomous vehicles. There are debates about the legal implications of the accident, particularly regarding the accountability of the vehicle's operator and the technology itself. Ultimately, the incident underscores the complexities of integrating self-driving cars into public spaces and the necessity for rigorous safety standards.
  • #201
I don't subscribe to the argument "There are billions at stake here, and if people die, well, you got to break some eggs if you want an omelet." That argument can be used for killing a liquor store clerk for a few hundred bucks in the till - differing only in degree. ("Dude was in my way, man.")

I asked four questions in #75:
  • Was the probability of a fatality calculated? If not, why not?
  • What is the acceptable fatality rate?
  • Was there any management pressure for Risk to calculate an arbitrarily low accident/fatality rate?
  • As accidents have occurred, such as the one a year ago in Tempe, how is the model's predictive power compared with data? Is the model adjusted in light of new information? If so, by whom? Who decides whether the new risks are acceptable?
I would argue that before it is ethical to alpha test this software on the street, with innocent and unaware citizenry taking on all the risk, there should be some pretty good answers to these questions.
 
  • Like
Likes russ_watters
  • #202
Vanadium 50 said:
There are billions at stake here
A world economy, and the race to spring ahead, or keep ahead.
High tech in its various forms is sought after.

China has a top down approach, which by this report can beat the world in adoption.
http://fortune.com/2016/04/23/china-self-driving-cars/

Western attitude is industry will sort itself out and adopt standards.
The billions is investment dollars, markets, jobs, and standard of living.
The risk is also taken on by industry and the finance industry.

Now if China beats the world, their product may just become more desirable than anything from the NA, European auto industry.
I have no knowledge of what they consider acceptable risk for a fatality involving a self driving car.

I would bet that more people have been killed falling off horses, climbing Mt. Everest, or swimming with sharks than by self-driving cars in recent years.
It may seem that those are self-chosen undertakings, but companies organize such adventures with an associated risk that the public is generally only vaguely aware of and considers acceptable.

Vanadium 50 said:
I asked four questions in #75:
And those are questions the victim's lawsuit will be demanding answers to while they negotiate for $500M from various entities, which may give the NA industry the kick in the pants it needs to get its act together. In addition, if the insurance industry has to pay out big bucks and refuses to cover self-driving cars without assurances, that's another kick for the industry and government.
 
  • #203
StoneTemplePython said:
I completely agree with this by the way. Either Fed level, or individual states that permit these vehicles to run need clear guidelines, badly.
I said federal in this case because the states aren't stepping up and that would be more expedient. But it is technically a state responsibility.
It also strikes me as very bad policy. There is a massive body count every year due to 'regular' car accidents in the US... Some reading here: https://www.economist.com/united-states/2015/07/04/road-kill note: "In 2014 some 32,675 people were killed in traffic accidents. In 2013, the latest year for which detailed data are available, some 2.3m were injured—or one in 100 licensed drivers."
- - - -
Self-driving cars, from what I can tell, are by far the most promising way out of this. Charging employees and/or companies with felonies in situations where I don't see malevolence -- or a well-defined standard of criminal negligence -- seems myopic. We're talking about a single-digit body count, possibly double digits over the next couple of years, from people (i.e. a handful of self-driving car companies and their personnel) acting in good faith to improve self-driving cars (though I'm not happy with the way it's being done -- again, rules needed) vs tens of thousands of fatalities every year and millions of injuries, and that's just in the US.
"Malevolence" is intentionally causing harm. We're discussing *negligence* here. Do you not see negligence in instructing a driver to not look at the road?
256bits said:
That's just a one opinion from a single report, and it can be squashed as being incorrect.
The report is from the NTSB. Its purpose is, primarily, to find facts. The NTSB has a well-regarded reputation for uncovering the facts of what happens in accidents and is unlikely to be wrong on the critical and well-established facts, especially considering just how well documented the facts of this case are. But either way, it's the only report we have, so let's not idly speculate about alternate facts.
People hit deer, and other wild animals jutting out into traffic and are not labeled 'distracted'.
It doesn't appear to me that you read the report. "Distracted" is an action, not an outcome. A driver is not labeled "distracted" because they hit something, they are labeled "distracted" because they aren't looking at the road. In this incident, the accident occurred primarily because the driver wasn't watching the road.
...a reasonable driver may not be able to avoid a collision with such a pedestrian.
From what is known about this incident, that does not appear to be true.
As I said previously, the "safety" driver is superfluous for the most part...
Please read the NTSB report! What you are saying is simply not true. In this incident, the driver was specifically responsible for emergency braking. It was a defined function of the test being performed that emergency braking by the AI was disabled and the driver was responsible.
Since there is no oversight on what a safe AI car really is, and the agencies have allowed the live testing, I really do not comprehend how they can turn around and say criminal intent.
Please read the NTSB report. It should make the criminal negligence clearer. The law does not require a specific definition of every form of negligence (and indeed Google tells me that Arizona is one of two states without a distracted driving law), because that would require infinite prescience. Humans will always be able to think of creative new ways to kill people through negligence.
 
  • Like
Likes Bystander, nsaspook and berkeman
  • #204
Vanadium 50 said:
I don't subscribe to the argument "There are billions at stake here, and if people die, well, you got to break some eggs if you want an omelet."...
I would argue that before it is ethical to alpha test this software on the street, with innocent and unaware citizenry taking on all the risk, there should be some pretty good answer to these questions.
I really hope no one was after that line of argument (for money or net savings of lives), but I'll put a finer point on it: western law/morality does not allow use of the utilitarian principle on matters of life safety. It is not permissible to knowingly kill or risk killing one group of people in order to save another. Not even if you only knowingly kill one person to save a thousand or a million.

It would be really surprising to me if anyone was going in that direction, but still, the lack of outrage over this is a bit surprising to me given the anti-corporate atmosphere in the US these days. It seems tech companies are immune to the outrage the financial and pharma industries get when they are willfully negligent.
 
  • #205
russ_watters said:
I said federal in this case because the states aren't stepping up and that would be more expedient. But it is technically a state responsibility.

"Malevolence" is intentionally causing harm. We're discussing *negligence* here. Do you not see negligence in instructing a driver to not look at the road?

I've spent far too much time with lawyers, I'm afraid.

There are multiple grades of negligence. Only the most severe form is criminal. (And perhaps the most severe form of that is criminally negligent homicide.) There are a lot of lesser grades that may have civil but not criminal implications.

You mentioned "homicide" charges while I mentioned that I didn't think it met "well defined standard of criminal negligence". Whether or not it is "negligent" in instructing a 'driver' of a quasi-automonomous vehicle to not look at the road is one question.

A much stricter question is whether it is criminally negligent in instructing a 'driver' of a quasi-automonomous vehicle to not look at the road. I don't see any case law or precedent suggesting that this higher bar has been met. And as said, the existing statutes don't directly address (or even foresee) such a thing.

And as I've said it's bad policy too.
 
  • #206
StoneTemplePython said:
A much stricter question is whether it is criminally negligent to instruct a 'driver' of a quasi-autonomous vehicle not to look at the road.
Sorry, I haven't been keeping up in this thread. I thought Tesla used CYA and said that the driver was supposed to be paying attention.
 
  • #207
StoneTemplePython said:
There are multiple grades of negligence. Only the most severe form is criminal. (And perhaps the most severe form of that is criminally negligent homicide.) There are a lot of lesser grades that may have civil but not criminal implications.

You mentioned "homicide" charges while I mentioned that I didn't think it met "well defined standard of criminal negligence". Whether or not it is "negligent" in instructing a 'driver' of a quasi-automonomous vehicle to not look at the road is one question.

A much stricter question is whether it is criminally negligent in instructing a 'driver' of a quasi-automonomous vehicle to not look at the road. I don't see any case law or precedent suggesting that this higher bar has been met. And as said, the existing statutes don't directly address (or even foresee) such a thing.
You won't see case law examples because self-driving cars are a new thing. But please make sure that you saw the part in the report about how the emergency braking function of the car was disabled and the driver was therefore explicitly responsible for emergency braking. To me this makes the quasi-autonomous nature of the car irrelevant and this case essentially identical to a texting-and-driving case. There is room to quibble about manslaughter vs homicide, but that isn't a hair I care to split. Either way, I see reason to put many people in jail over this, as is common in texting-and-driving cases. (see: https://www.al.com/news/mobile/index.ssf/2016/02/texting_and_driving_manslaught_3.html )
And as I've said it's bad policy too.
Why? Because expediently getting this tech to market will save lives overall even if the negligence kills a few people? I really hope you aren't suggesting that: please see my post #204.
 
  • #208
berkeman said:
Sorry, I haven't been keeping up in this thread. I thought Tesla used CYA and said that the driver was supposed to be paying attention.

you're breaking my preferred level of abstraction here :wink:

I think the comment was about Uber not Tesla. In any case there is a big difference between 'mere negligence' and the criminal variety which is what I was trying to highlight.
 
  • #209
berkeman said:
Sorry, I haven't been keeping up in this thread. I thought Tesla used CYA and said that the driver was supposed to be paying attention.
Wrong case. You are correct regarding the Tesla that drove under a truck and decapitated a man, but we're discussing the Uber that struck and killed a pedestrian. In the test being conducted, the car was semi-autonomous, but the driver was responsible for emergency braking, as that system was disabled in the software.
 
  • #210
russ_watters said:
I really hope no one was after that line of argument (for money or net savings of lives), but I'll put a finer point on it: western law/morality does not allow use of the utilitarian principle on matters of life safety. It is not permissible to knowingly kill or risk killing one group of people in order to save another.

This really is too simplistic. You are correct in the deterministic case.

The statement is either wrong or much too simplistic in the probabilistic sense of risking lives (which I underlined for emphasis).

If you look at the design and analysis of speed limits for instance, you'll see that policy makers know that more people will die with higher speed limits. (The wonkish sort even have distributions for increased deaths, and the probability of no increased deaths in a one year period when increasing the speed limit from say 30 to 60 mph in a big metropolitan area is basically ##\lt \epsilon## for any ##\epsilon \gt 0## that I can think of.)

It's worth mentioning that fatalities shoot up dramatically with speed limits over ##\approx 30## mph. The fatalities I'm referring to concern not just drivers and passengers in cars, but bicyclists and pedestrians being hit by them. However there are trade offs in terms of huge costs to the economy, etc. from restricting speed limits to under 30 mph. So we don't do that. This is a pragmatic, or if you prefer utilitarian, guided policy.

There are related ideas behind tort and liability laws. This sort of thing is addressed, perhaps too literally, in a Law and Economics course.

So, yes, your statement holds deterministically. But when you sprinkle randomness in, it does not hold universally. Some nasty tradeoffs are real and policy makers know this. I can probably dredge up more material from a law and econ course I took a while back if needed. Maybe a separate thread. This is very subtle stuff.
 
  • #211
StoneTemplePython said:
A much stricter question is whether it is criminally negligent to instruct a 'driver' of a quasi-autonomous vehicle not to look at the road.

I think the case can be made that the entire testing protocol is criminal. In Arizona, one can be charged with up to Murder 2 if "Under circumstances manifesting extreme indifference to human life, a person recklessly engages in conduct which creates a grave risk of death and thereby causes the death of another person."

I hope we can agree that if Uber calculated that the risk of killing someone over the course of their testing was 99%, that is criminal behavior. I also hope that we can agree that if they didn't calculate this risk, began testing anyway, and killed someone, this is also criminal behavior. Given the facts, it is hard to believe that they calculated the risk: it's hard to believe that disabling safety systems and giving the "safety driver" tasks that would preclude being a safety driver could run an acceptable risk.

Waymo is much more forthcoming about their safety data, and can make a compelling case that they have correctly evaluated the risks and that they are acceptable. Uber has not been as forthcoming. Oh, and they plan to resume testing this summer - just not in Arizona.
 
  • Like
Likes russ_watters and Bystander
  • #212
Vanadium 50 said:
Waymo is much more forthcoming about their safety data, and can make a compelling case that they have correctly evaluated the risks and that they are acceptable. Uber has not been as forthcoming. Oh, and they plan to resume testing this summer - just not in Arizona.

You are closer to the facts than me on this. It sounds about right -- and concerning.
Vanadium 50 said:
I hope we can agree that if Uber calculated that the risk of killing someone over the course of their testing was 99%, that is criminal behavior. I also hope that we can agree that if they didn't calculate this risk, began testing anyway, and killed someone, this is also criminal behavior. Given the facts, it is hard to believe that they calculated the risk: it's hard to believe that disabling safety systems and giving the "safety driver" tasks that would preclude being a safety driver could run an acceptable risk.

Well here's something that may not have been said before:

your point requires more thought on my end but unfortunately I need to go bowling now. (And watch Rockets game.)

I'll revert tomorrow.
 
  • #213
StoneTemplePython said:
unfortunately I need to go bowling now. (And watch Rockets game.)
Gutterballs on both, sorry. :smile:
 
  • #214
russ_watters said:
In this incident, the accident occurred primarily because the driver wasn't watching the road.
No driver watches the road 100% of the time, so labelling what she was doing a distraction necessarily implicates every other driver as distracted when they read road signs, check the instrument panel, or do all sorts of other things such as talking to passengers or adjusting the climate control. The NTSB's finding, from the video and verbal evidence, that the driver was not watching the road at the time of the incident does not lead to the conclusion that the driver was a primary cause of the incident, or that the incident could have been avoided had the driver been attentive to the roadway - in which case she might have decided not to glance left in the moments before the incident.
 
  • #215
russ_watters said:
they are labeled "distracted" because they aren't looking at the road
Distracted from driving, which entails more functions than just looking at the road.

Vanadium 50 said:
Uber has not been as forthcoming
Such is the corporate culture at Uber; or at least they have acquired a reputation for attempting to resist and/or circumvent local laws in their taxi-service endeavours.
 
  • #216
Vanadium 50 said:
I hope we can agree that if Uber calculated that the risk of killing someone over the course of their testing was 99%, that is criminal behavior.
I disagree. Every car manufacturer knows that over the course of their product lifetime one of their cars will be involved in a fatal accident with basically 100% probability. Does it make producing these cars criminal behavior? Clearly not. But we can make an even stronger statement: There are cars that do not have an automatic braking capability. I'm sure you can calculate that including such a feature will save at least one life with near certainty. Does it make producing cars without automated braking capability a crime? I doubt that.

I see criminal behavior if the risk of killing someone is significantly increased compared to driving normally. A safety driver who mainly looks at a screen, combined with flawed software*, might lead to such a situation - but we don't know that. How many fatal accidents would human drivers have caused over the course of Uber's test? Do you know that number? I don't.

*the car recognized an object on the street 6 seconds in advance. More than enough time to slow down, especially if the object cannot be identified clearly. The disabled emergency braking was not necessary until 1.3 seconds before the impact.
Vanadium 50 said:
I also hope that we can agree that if they didn't calculate this risk, began testing anyway, and killed someone, this is also criminal behavior.
Do you calculate the risk to kill someone every time you drive with a car? If not, would that automatically mean you behaved criminally if you get involved in a fatal accident?
 
  • Like
Likes StoneTemplePython
  • #217
mfb said:
Do you calculate the risk to kill someone every time you drive with a car? If not, would that automatically mean you behaved criminally if you get involved in a fatal accident?

No, but the government(s) who issued me my driver's license(s) has. They have evaluated my driving and driving history and determined that under specified circumstances the risk is legally acceptable. If I deviate from those circumstances - e.g. turn my headlights off at night - and kill someone, you bet I can be charged with criminal behavior.

mfb said:
I disagree. Every car manufacturer knows that over the course of their product lifetime one of their cars will be involved in a fatal accident with basically 100% probability. Does it make producing these cars criminal behavior? Clearly not.

I disagree, and the courts are on my side. See State of Indiana v. Ford (1980), where Ford Motor Company was charged criminally because of a cost/benefit analysis it did, which led to the determination that it was cheaper to settle with the estimated 360 victims and their families than to spend the $11 per vehicle to prevent those injuries. It is true that Ford was found not guilty, and in the US jurors do not have to explain their reasoning, but post-trial interviews suggest that they were swayed by Ford's attempts to recall the vehicles before the collision in question and by defense claims that this particular accident was unsurvivable in any vehicle.

Under the Indiana precedent, Uber can absolutely be charged. Will they be convicted? Nobody can predict a jury, but the elements that apparently led to a not-guilty verdict there seem to be absent here.
 
  • #218
Vanadium 50 said:
No, but the government(s) who issued me my driver's license(s) has. They have evaluated my driving and driving history and determined that under specified circumstances the risk is legally acceptable. If I deviate from those circumstances - e.g. turn my headlights off at night - and kill someone, you bet I can be charged with criminal behavior.
Did Uber violate government regulations?
Vanadium 50 said:
I disagree, and the courts are on my side. See State of Indiana v. Ford (1980), where Ford Motor Company was charged criminally because of a cost/benefit analysis they did, leading to the determination that it was cheaper to settle with the estimated 360 victims and their family than to spend the $11 per vehicle to prevent these injuries. It is true that Ford was found not guilty, and in the US jurors do not have to explain their reasoning, but post-trial interviews suggest that they were swayed by Ford's attempts to recall the vehicles before the collision in question and defense claims that this particular accident was unsurvivable in any vehicle.
Even if Ford had lost (and it didn't), I don't see how this would be relevant here. Cars kill people. The more cars you make, the more people will be killed by these cars. That is an obvious truth. "There is a high chance it will be involved in at least one fatal accident" is not an argument on its own. You have to consider the relative rate.
 
  • Like
Likes StoneTemplePython
  • #219
mfb said:
relative rate.
Damned high.
 
  • #220
Vanadium 50 said:
I hope we can agree that if Uber calculated that the risk of killing someone over the course of their testing was 99%, that is criminal behavior. I also hope that we can agree that if they didn't calculate this risk, began testing anyway, and killed someone, this is also criminal behavior. Given the facts, it is hard to believe that they calculated the risk: it's hard to believe that disabling safety systems and giving the "safety driver" tasks that would preclude being a safety driver could run an acceptable risk.

I was thinking about this a bit yesterday. I believe @mfb spotted the flaw here.

I find the statement troubling on basic qualitative grounds. For a sufficiently large amount of time / trials, we can get the probability of a fatality arbitrarily close to 1, assuming (loose) independence and a probability of failure in the kth 'trial' given by ##p_k##, where all failure probabilities are bounded below by some constant greater than zero. (The conventional case is to assume iid trials, which simplifies things a lot.)

Put differently, your statement here is scale invariant (with respect to amount of 'trials' or 'car hours' or 'car miles'), so it can't be meaningful. Working backwards from here we need a ruler to measure 'too dangerous' or reckless by, and that is given by probabilities of regular people causing fatalities 'on accident' while driving (per trial or 'car hour' or 'car miles').

The public may have different tolerances for fatalities by self driving cars vs people driving cars. All the more reason to get the rules, guidelines, safe harbor provisions, etc. on the books via the legislative process.
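To make the scale-invariance point concrete, here is a minimal numerical sketch (the per-mile failure probability is made up purely for illustration):

```python
# Hypothetical illustration: probability of at least one fatality over n independent
# "trials" (e.g. vehicle-miles), each with a tiny failure probability p.
# The specific numbers are made up; only the scaling behaviour matters.

def prob_at_least_one(p, n):
    """P(at least one failure in n independent trials, per-trial probability p)."""
    return 1.0 - (1.0 - p) ** n

p = 1e-8  # assumed per-mile fatality probability (illustrative only)
for miles in (1e6, 1e7, 1e8, 1e9, 1e10):
    print(f"{miles:>8.0e} miles: P(>=1 fatality) = {prob_at_least_one(p, int(miles)):.4f}")

# The probability climbs toward 1 for any fixed p > 0 as the mileage grows, so
# "there was a high chance of at least one fatality" is not informative by itself;
# the meaningful comparison is the per-mile (or per-hour) rate against human drivers.
```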
 
Last edited:
  • #221
Another interesting point is the question of how to evaluate the risk of self-driving cars without self-driving cars on actual roads. Sure, you can simulate millions of scenarios, but so far it looks like the accidents come from the unknown unknowns - scenarios not anticipated in the programming, and correspondingly scenarios a simulation could miss.
I don't think that is a reason to ban all tests. When someone gets a new driver's license the government doesn't have an accurate estimate of the risk from that driver either. The tests done with driverless systems are much more thorough than a 30-minute test drive.
 
  • #222
mfb said:
You have to consider the relative rate.
Even if the relative rate is lower for self-driving cars, I think there is a major flaw that will turn people against such vehicles: there isn't a soul to blame if something goes wrong.

As I see it, finding somebody to blame is quite frequently more important than a lower probability of error. People just don't deal well with this kind of helplessness.
 
  • #223
We have many types of accidents that are not clearly the fault of some person. I don't think one more type will be a big issue.
 
  • #224
Rive said:
Even if the relative rate will be lower for self-driving cars, I think there is a major flaw there what will drive people crazy against such vehicles: there isn't a soul there to blame if something goes wrong.

Once the bugs are worked out insurance companies will probably lower your rates if you own a self driving car?
 
  • #225
CWatters said:
Nsaspook... I think that video answers my question.

Looks like it just followed the brightest white line.

Another accident where auto-pilot seems to have followed the white line.
https://twitter.com/LBPD_PIO_45/sta...shes-into-laguna-beach-police-patrol-vehicle/

  • #226
Oops!
 
  • #227
StoneTemplePython said:
If you look at the design and analysis of speed limits for instance, you'll see that policy makers know that more people will die with higher speed limits.
This is not in the same class of risk as what we are discussing, for exactly the reason you underlined: it's a known risk. Every driver, every other driver, every pedestrian, politician, automotive engineer and automotive executive knows that excessive speed causes accidents. And every one of them can take assertive steps to mitigate that risk or choose to accept it as is.

Self-driving cars present *unknown* risks. In many cases nobody knows what the risks are. In this particular incident, however, there was a clear risk that should have been addressed by Uber. But failing that, there was absolutely no way for this particular pedestrian to have known that this particular car/driver carried a vastly higher risk of hitting her than an average car/driver. And while I agree with @Vanadium 50 that a detailed risk analysis should be carried out for these cars/tests (if such testing is even allowed), this particular situation is far too obvious for it even to have been necessary to get that far. This specific risk is literally covered by the first two rules of safe driving on the first link I clicked looking for such rules:
https://www.nationwide.com/driving-safety-tips.jsp
  • Keep 100% of your attention on driving at all times – no multi-tasking.
  • Don’t use your phone or any other electronic device while driving.
And just to circle back to your example, the third on the list:
  • Slow down. Speeding gives you less time to react and increases the severity of an accident.
So again; it is in my opinion totally unreasonable - and therefore in my opinion grossly negligent - that Uber did not take appropriate steps to mitigate this risk. And it is even worse for being a risk that Uber forced on an unknowing public.

And really, this should not be debatable. I can't fathom that someone would think it should be acceptable for Uber to be violating basic safe driving rules - and again, it disturbs me that I think I'm seeing people arguing that what Uber did is acceptable. And on just how bad it is, we don't have to argue the minutiae of levels of negligence: people *do* get arrested and charged with forms of homicide for killing people when driving while distracted. It's a real thing.
 
  • Like
Likes Vanadium 50 and berkeman
  • #228
256bits said:
No driver watches the road 100% of the time, so labelling what she was doing a distraction then necessarily implicates each and every other driver as being distracted when they read road signs, check the instrument panel, and do all sort of other things such as talking to passengers or adjusting the climate control.
I'll be more explicit this time: did you read the report? Because this response is just plain absurd. This driver was "distracted" by definition. What the driver was doing is explicitly described as a distracted driving action.
 
  • #229
mfb said:
I disagree. Every car manufacturer knows that over the course of their product lifetime one of their cars will be involved in a fatal accident with basically 100% probability. Does it make producing these cars criminal behavior? Clearly not. But we can make an even stronger statement: There are cars that do not have an automatic braking capability. I'm sure you can calculate that including such a feature will save at least one life with near certainty. Does it make producing cars without automated braking capability a crime? I doubt that.
Like @256bits, you aren't dealing with the scenario for what it was. You are not describing what actually happened. What Uber did is *explicitly* illegal in most jurisdictions; Arizona just happens not to have it on the books as an explicit law, so there it is 'merely' contrary to normal safe driving practice.
I see criminal behavior if the risk to kill someone is significantly increased compared to driving normally. A safety driver who mainly looks at a screen combined with a flawed software* might lead to such a situation - but we don't know that.
What do you mean? How do we not know it? It's explicitly illegal in most jurisdictions and it's rules #1 & 2 of safe driving! Why? Because it is a significantly higher risk. How much higher? We could probably calculate it, but I'll guess for a start that it's 20,000% riskier. That's based on average drivers being able to avoid this accident 999 times out of 1000, while, based on the fraction of time this driver was looking away from the road, this driver would only have been able to avoid it 4 times out of 5.
How many fatal accidents would human drivers have made over the course of Uber's test? Do you know that number? I don't.
Broad/overall statistics have nothing to do with crimes. You can't argue your way out of a speeding ticket by claiming that most of the time you don't speed, even if it is true!
Do you calculate the risk to kill someone every time you drive with a car? If not, would that automatically mean you behaved criminally if you get involved in a fatal accident?
That just isn't how it works, and you must know this. Rules exist because other people have calculated the risks for us drivers and determined what is and isn't an acceptable risk. You can't get out of a DUI by saying, "It's ok officer; I've tested myself and I still drive acceptably safely with a 0.085 BAC."
Cars kill people. The more cars you make, the more people will be killed by these cars. That is an obvious truth. "There is a high chance it will be involved in at least one fatal accident" is not an argument on its own. You have to consider the relative rate.
While you put that in quotes, nobody arguing in favor of criminal charges is making that argument; it's a straw man. Maybe you got that from V50's statement that Uber should be doing risk analysis, but this particular case goes beyond risk analysis: Uber engaged in known unsafe and typically illegal behavior. And someone died because of it.
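For reference, here is the arithmetic behind that rough 20,000% figure, using the assumed numbers above (1-in-1000 miss rate for an attentive driver, 1-in-5 for a driver looking away; both are guesses, not measurements):

```python
# Back-of-envelope relative risk behind the "20,000% riskier" guess above.
# Both input probabilities are assumptions from the post, not measured data.

p_attentive = 1 / 1000   # assumed chance an attentive driver fails to avoid this accident
p_distracted = 1 / 5     # assumed chance a driver with eyes off the road fails to avoid it

relative_risk = p_distracted / p_attentive
print(f"Relative risk: {relative_risk:.0f}x the attentive-driver baseline, "
      f"i.e. roughly {relative_risk * 100:.0f}% of baseline risk")
# -> 200x, i.e. on the order of 20,000% of (or ~19,900% above) the baseline.
```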
 
Last edited:
  • Like
Likes Vanadium 50 and berkeman
  • #230
mfb said:
We have many types of accidents that are not clearly the fault of some person. I don't think one more type will be a big issue.
What we have here is - so far - about half a dozen new types that are clearly the fault of some people.

However, the specific case we are discussing doesn't even have anything directly to do with self-driving cars. At its core, what we have is a driver who hit a pedestrian because she was looking at a computer when she should have been braking.
 
  • #231
Here are several cases of charges or convictions for manslaughter or similar offenses due to distracted driving:
http://distracteddriveraccidents.com/distracted-driver-gets-manslaughter/
https://abcnews.go.com/US/story?id=93561&page=1
http://www.kctv5.com/story/17587020/teen-charged-with-manslaughter-in-texting-while-driving-case
http://www.foxnews.com/us/2017/08/03/woman-indicted-in-deadly-texting-and-driving-case.html

In some of these cases the distracted act is explicitly illegal and in some cases it is not (all of these involved talking or texting). Based on what I'm seeing, including legal advice such as the below, such charges are pretty much the standard outcome of similar cases:
https://www.pittsburghcriminalattorney.com/can-texting-driving-lead-murder-charge/
"...most cases of death due to distracted driving would be classified as homicide by vehicle..."
 
  • #232
On the TV news just now...

"...accident involved the self-driving mode of the Tesla vehicle..."

http://abc7news.com/tesla-on-autopilot-crashes-into-parked-police-car/3539142/

"Tesla emphasizes that the driver should keep their hands on the steering wheel at all times and pay attention to the road..."

Self driving?

(EDIT -- the quoted text was from the TV news report, not necessarily from the linked news report)
 
  • #233
@russ_watters: What exactly makes you so sure about the higher risk - so sure you put some huge number on it?
Maybe the risk was lower and it was just bad luck? Maybe the risk was lower and it was not bad luck - human drivers would have killed more than one person during the test?

If you are so sure that the risk was higher you must have access to some source I do not. Where did you get that risk estimate from? What is the probability that the car does not correctly brake in such a situation? What is the probability for an average human driver?
You claimed 99.9% for the human driver. Okay, let's go with that. Now we need an estimate for the self-driving car. If you say the risk is 20,000% higher and the human prevents 4 of 5 accidents, you must claim the car would never brake for pedestrians. That claim is blatantly absurd. It did not perform emergency braking, but no emergency braking was necessary here for several seconds.

russ_watters said:
I can't fathom that someone would think it should be acceptable for Uber to be violating basic safe driving rules - and again, it disturbs me that I think I'm seeing people arguing that what Uber did is acceptable.
If you are a passenger in a car you are not required to pay attention 100% of the time either. I think that is the best analogy here. The car was driving, and the car did pay attention - it just took the wrong action, for reasons that are being investigated. So do humans once in a while.
I don't think what Uber did was good - Uber should use *two* entities paying attention (the car and a person in the driver's seat) - but I don't think your arguments do anything to show that.
russ_watters said:
It's explicitly illegal in most jurisdictions and it's rules #1 & 2 of safe driving!
It is illegal if you are the driver. That's the point of driverless cars: You are not the driver any more.
Also, "it is illegal at some other place in the world" doesn't make an action illegal.
russ_watters said:
Cars kill people. The more cars you make, the more people will be killed by these cars. That is an obvious truth. "There is a high chance it will be involved in at least one fatal accident" is not an argument on its own. You have to consider the relative rate.
While you put that in quotes, nobody arguing in favor of criminal charges is making that argument; it's a straw man. Maybe you got that from V50's statement that Uber should be doing risk analysis, but this particular case goes beyond risk analysis: Uber engaged in known unsafe and typically illegal behavior. And someone died because of it.
Vanadium50 was making that argument. Here is the exact statement:
Vanadium 50 said:
I hope we can agree that if Uber calculated that the risk of killing someone over the course of their testing was 99%, that is criminal behavior.
berkeman said:
Self driving?
Teslas are not self-driving.
 
  • Like
Likes StoneTemplePython
  • #234
russ_watters said:
This is not in the same class of risk as what we are discussing, for exactly the reason you underlined: it's a known risk. Every driver, every other driver, every pedestrian, politician, automotive engineer and automotive executive knows that excessive speed causes accidents. And every one of them can take assertive steps to mitigate that risk or choose to accept it as is.

Self-driving cars present *unknown* risks... In many cases nobody knows what the risks are. In this particular incident, however, there was a clear risk that should have been addressed by Uber. But failing that, there was absolutely no way for this particular pedestrian to have known that this particular car/driver carried a vastly higher risk of hitting her than an average car/driver...

Sorry amigo, but this feels like a framing problem, and awfully close to the overly literal perfect information argument that you occasionally hear from (bad) economists, that I didn't think you'd buy into...?

It also doesn't hold water. Suppose for a contradiction that I actually like this argument:

I may think, for example, that pedestrians in big cities with dangerously high speed limits may mitigate mortality risks by moving elsewhere, refusing to walk (or even drive, as speed limits are too high), etc., or "choose to accept it as is". It's all their choice, right? If the individual doesn't agree, he/she could certainly not accept it and move to a different city or state or country.

In this case, same thing: move to one of the numerous places that is known to not allow self driving cars. Hence there is a contradiction because the public knows whether self driving cars are allowed in their city, and if not specifically about their city, then they know whether self driving cars exist in their state, and if not in their state, then in their country...
russ_watters said:
So again; it is in my opinion totally unreasonable - and therefore in my opinion grossly negligent... And on just how bad it is, we don't have to argue the minutiae of levels of negligence: people *do* get arrested and charged with forms of homicide for killing people when driving while distracted. It's a real thing.

Look, I'm not interested in getting deep into the details of the specific crash. From what I've read from you and Vanadium, I'm not happy about it. My point is that there is a massive gap between almost all forms of negligence and criminal negligence, especially criminally negligent homicide. I've been on more than a couple of conference calls about criminal negligence with very expensive lawyers re: one of the largest corporate disasters of the last 30 years. And yes, there was a body count greater than this one, and yes, in the US. It's a very high bar to clear. I'm not saying the hurdle is never surmounted. But I don't think you understand the legal issues here, and you are wildly overconfident that this exceeds the hurdle required for criminal negligence. You have a strong opinion on a legal matter, but you don't have a legal opinion... (or anything particularly 'close')
 
  • #235
russ_watters said:
While you put that in quotes, nobody arguing in favor of criminal charges is making that argument; it's a straw man. Maybe you got that from V50's statement that Uber should be doing risk analysis, but this particular case goes beyond risk analysis: Uber engaged in known unsafe and typically illegal behavior. And someone died because of it.
If one looks at the news, there are many instances of a product that has caused harm to consumers where no one is charged with criminal intent.
Such as choking hazards in children's toys, contaminated lettuce, saws, ladders, electrocution. Pretty much everything that humans manufacture, build, produce, cultivate, package, service and sell has a casualty risk factor associated with it, whether through regular use, defects and/or improper use.
In the natural world, casualties and harm from natural effects are Acts of God, and there is no one to charge or sue for the misfortune.
Once a human has 'touched' a product in any which way or form, I agree, there is an assumed responsibility associated with how much human manipulation has gone into the product.
Liability is not criminal if there is no intent to knowingly harm a fellow human, or their possessions.
Liability is incurred if the product does harm a fellow human or their possessions.
That is where we disagree.
I think the owner of the car should be sued for an unforeseen defect in the vehicle causing harm to a fellow human, but I disagree that there was criminal intent.
I say sue the pants off them for the defect. Fix it and make it better.

( I can't wait until the cars designated as self-driving, which they are not, are put on the market as a taxi service with the same? defect and no human supervision. Had this accident not happened in the early (middle, late?) stages of testing, we would be seeing cars driving around without a passenger or driver, being promoted as completely safe. Will we still see that? )
 
  • #236
PS.
If this car does go on market with the same defect, and labeled self-driving, then I will completely concur - criminal intent.
Same defect meaning only slight tweaking and not a major re-adjustment of the AI code and sensor interaction.
 
  • #237
mfb said:
Teslas are not self-driving.
Sorry, I'm missing the distinction. What is the difference between self-driving and autopilot?
 
  • #238
mfb said:
What exactly makes you so sure about the higher risk

Cars kill on average 12.5 people per billion miles driven. Uber self-driving cars kill on average 500 people per billion miles driven. You can reject the hypothesis that Uber cars are no less safe than human-driven cars at >95% confidence on the numbers we have.

For non-fatal accidents the rate is 6 per million miles driven. Waymo has 25 accidents in 5 million miles total - but if you look at the last three million miles, they have only had one: a rate that is 18x safer.

We can argue about small statistics, but the fact that in one case they are seeing a rate 40x higher and the other 18x lower says something. Uber had a product that is, to the best of our knowledge 720x more dangerous than Waymo's (and yes, 720 might be 500 or 1000), and because they wanted to beat Waymo in the market, tested it on an unsuspecting populace, and sure enough, they killed someone.
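As a rough check on that >95% figure (assuming fatalities follow a Poisson process and taking Uber's autonomous mileage as roughly 2 million miles, a figure mentioned later in the thread and treated here as approximate):

```python
import math

# Rough Poisson check of the ">95%" claim above.
# Assumptions: fatalities are a Poisson process; Uber had driven roughly
# 2 million autonomous test miles (approximate figure from this thread).

human_rate = 12.5 / 1e9   # fatalities per mile for human drivers
uber_miles = 2e6          # assumed Uber test mileage

expected = human_rate * uber_miles          # ~0.025 expected fatalities at the human rate
p_at_least_one = 1 - math.exp(-expected)    # P(>=1 fatality) if Uber were as safe as humans

print(f"Expected fatalities at the human rate: {expected:.3f}")
print(f"P(>=1 fatality) under that hypothesis: {p_at_least_one:.1%}")
# ~2.5%, so observing a fatality rejects "as safe as human drivers"
# at better than the 95% level, consistent with the claim above.
```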
 
  • Like
Likes atyy
  • #239
As a PS being legally drunk increases your odds of an accident by a factor of 4-5, depending on what legally drunk is in your jurisdiction. Compare that to a factor of 18.
 
  • #240
In 1921, the first year for which these statistics may have been available, there were 21 fatalities per 100 million vehicle miles, vs. 1.18 fatalities per 100 million miles in 2016. Is it fair to compare AVs in the early stages of development and public familiarity with vehicles that have been in the public domain for almost a century?

While Uber made a mistake in allowing that vehicle on the road, how do you compare that with current auto manufacturers who allow defective cars to be used while fatalities accumulate? One manufacturer was supposed to have had the policy that it was cheaper to settle lawsuits than repair (recall) the cars affected.
 
  • #241
gleem said:
In 1921 first year this statistics may have been available there were 21 fatalities per 100 M vehicle miles vs 2016 with 1.18 fatalities per 100 M miles. Is it fair to compare AV in the early stages of development and public familiarity with vehicles that have been in the public domain for almost a century.

Uber presently has a rate 2.4x higher than that.

gleem said:
One manufacturer was supposed to have had the policy that it was cheaper to settle lawsuits than repair (recall) the cars affected.

Yes, that was Ford, and they were charged criminally for that, and I was the one who brought it up.
 
  • #242
As a pedestrian in Arizona you are more likely to be killed than in any other state as a percentage of the population: 1.61 fatalities per 100K vs. a national average of 0.81, with 75% occurring in the dark. The four most populous states (NY, CA, TX, FL) have the most fatalities, but AZ, ranked 16th in population, comes next in fatalities. Together these 5 states account for 43% of national pedestrian fatalities. PA, the fifth most populous state, has only 0.49 fatalities per 100K. The county with the most pedestrian fatalities in the US is Maricopa County, AZ, which is where Tempe is. Uber could have reduced their exposure to untoward incidents by not testing their cars in AZ.

Governors Highway Safety Association report: https://www.ghsa.org/sites/default/files/2018-02/pedestrians18.pdf
 
  • Like
Likes atyy
  • #243
gleem said:
Uber could have reduced their exposure to untoward incidents by not testing their cars in AZ.

And yet they chose not to.
 
  • #244
Of course, on the other hand, if they had carried out their tests in Hawaii this accident would have accounted for 50% of Hawaii's fatalities this year instead of just 0.9% in Arizona.
 
  • Like
Likes atyy
  • #245
berkeman said:
Sorry, I'm missing the distinction. What is the difference between self-driving and autopilot?
A self-driving car is a car that doesn't need a human driver. The Tesla autopilot controls the speed and steering in some conditions but cannot handle all traffic situations, hence the need for the human to pay attention the whole time. Tesla cars tell you that clearly before you can use the autopilot. If you don't pay attention as driver (!) it is your fault.
Vanadium 50 said:
Cars kill on average 12.5 people per billion miles driven. Uber self-driving cars kill on average 500 people per billion miles driven. You can reject that hypothesis that Uber cars are no less safe than human driven cars to >95% on the numbers we have.

For non-fatal accidents the rate is 6 per million miles driven. Waymo has 25 accidents in 5 million miles total - but if you look at the last three million miles, they have only had one: a rate that is 18x safer.

We can argue about small statistics, but the fact that in one case they are seeing a rate 40x higher and the other 18x lower says something. Uber had a product that is, to the best of our knowledge 720x more dangerous than Waymo's (and yes, 720 might be 500 or 1000), and because they wanted to beat Waymo in the market, tested it on an unsuspecting populace, and sure enough, they killed someone.
Numbers! Thanks.
Uber had 2 million miles driven by the end of 2017. At the same number of miles Waymo had 24 non-fatal accidents, or 12 times higher than the general rate. If they had the same higher risk for fatal accidents (150 per billion miles) they had a 30% chance of a fatal accident within these 2 million miles. Maybe they were just luckier. You know how problematic it is to draw statistical conclusions based on a single event or the absence of it.

I see two clear conclusions based on the numbers here:
* Waymo reduced its nonfatal accident rate over time, and it is now below the rate of human drivers
* The ratios "Uber fatal accident rate to human fatal accident rate in the first 2 million miles" and "Waymo nonfatal accident rate to human nonfatal accident rate in miles 2 million to 5 million" are significantly different. I'm not sure how much that comparison tells us.

Another thing to note: To demonstrate a lower rate of fatal accidents, Waymo will have to drive ~250 million miles, 50 times their current dataset, assuming no fatal accident.

All this assumes the driving profiles for the cars and for humans are not too different. If the cars drive more or less frequently in the dark, or more or less frequently on the highway (lower accident rates, but more of those accidents are fatal), or similar, the numbers might change.
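For anyone who wants to reproduce the two figures above, here is a quick sketch under the same simple Poisson assumption (the 95% confidence cut-off in the second calculation is my own choice of threshold):

```python
import math

# Reproducing the two numbers above under a simple Poisson model.

# 1) Chance of at least one fatal accident in Uber's first 2 million miles,
#    if the fatal-accident rate were 150 per billion miles (figure from the post above).
rate = 150 / 1e9
miles = 2e6
p_fatal = 1 - math.exp(-rate * miles)
print(f"P(>=1 fatal accident in 2M miles at 150/billion): {p_fatal:.0%}")  # ~26%, i.e. roughly 30%

# 2) Miles Waymo would need to drive with zero fatalities to demonstrate, at ~95%
#    confidence, a rate below the human rate of 12.5 fatalities per billion miles.
human_rate = 12.5 / 1e9
miles_needed = math.log(1 / 0.05) / human_rate   # exp(-rate * m) < 0.05  =>  m > ln(20) / rate
print(f"Miles needed with zero fatalities: {miles_needed / 1e6:.0f} million")  # ~240 million
```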
 
  • #246
Comparing apples to oranges.
According to Virginia Tech, the crash rate of self-driving cars is lower than the national average per mile driven.
https://www.vtti.vt.edu/featured/?p=422
[Figure 1: Estimated crash rates from SHRP 2 NDS (age-adjusted) and the Self-Driving Car]

When compared to national crash rate estimates that control for unreported crashes (4.2 per million miles), the crash rates for the Self-Driving Car operating in autonomous mode when adjusted for crash severity (3.2 per million miles; Level 1 and Level 2 crashes) are lower. These findings reverse an initial assumption that the national crash rate (1.9 per million miles) would be lower than the Self-Driving Car crash rate in autonomous mode (8.7 per million miles) as they do not control for severity of crash or reporting requirements. Additionally, the observed crash rates in the SHRP 2 NDS, at all levels of severity, were higher than the Self-Driving Car rates. Estimated crash rates from SHRP 2 (age-adjusted) and Self-Driving Car are displayed in Figure 1.
 

  • Like
Likes mfb
  • #247
These rates are for Google's cars aka Waymo. As rough guideline: Level 1 are serious crashes (involving airbags, injuries, require towing, ...). Level 2 accidents have property damage only, level 3 means no/minimal damage.
If we combine the first two we get about 6 per million miles for human drivers and 3 per million miles for Google (but only with 1.3 million miles driven - so just 4 accidents in total).
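As an aside on how little 4 events can pin down: an exact Poisson confidence interval on 4 accidents in 1.3 million miles is very wide (this is my own illustration, not from the VTTI report; it needs SciPy):

```python
from scipy.stats import chi2

# Exact (Garwood) 95% Poisson confidence interval for a rate estimated from
# 4 accidents observed in 1.3 million miles. Illustration only.

k = 4          # observed accidents
miles = 1.3e6  # miles driven

lower = chi2.ppf(0.025, 2 * k) / 2 / miles * 1e6        # per million miles
upper = chi2.ppf(0.975, 2 * k + 2) / 2 / miles * 1e6    # per million miles

print(f"Point estimate: {k / miles * 1e6:.1f} per million miles")
print(f"95% CI: {lower:.1f} to {upper:.1f} per million miles")
# -> roughly 0.8 to 7.9 per million miles, an interval that still contains the
#    ~6 per million miles human rate, so the comparison is far from conclusive.
```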
 
  • #248
mfb said:
These rates are for Google's cars aka Waymo. As rough guideline: Level 1 are serious crashes (involving airbags, injuries, require towing, ...). Level 2 accidents have property damage only, level 3 means no/minimal damage.
If we combine the first two we get about 6 per million miles for human drivers and 3 per million miles for Google (but only with 1.3 million miles driven - so just 4 accidents in total).

If those are Waymo only, there is a huge difference in the approach Waymo is taking and the approach Tesla is taking.

For example, Tesla has no Lidar, something that Waymo considers a necessity.
 
  • #249
Interesting details but I think we all know the root issue here is human decision making on all sides.

I don't know the level of misclassification of objects, but it must have been serious for the system to be disabled even for human-sized object detection at close range. Too many false positives, or emergency braking when the impact danger is minor compared to the possible consequences of stopping (a small animal in the road with a semi behind you), can also be dangerous, so it should have been a top-priority fix for Uber. Waymo seems to have designed systems that handle most cases.

This accident was caused by humans on all sides. The car detected the jaywalking, intoxicated human (as an unknown object) in the dark 6 seconds before impact and would have triggered emergency braking almost two seconds before hitting her. Humans made the decision to disable automatic braking instead of fixing the classification problem first; humans decided to put a single 'safety' driver in the car with an impossible job; and that human failed, as every study of human reaction time predicts for someone monitoring for the need for emergency braking while not actively driving. The least responsible entity in the accident, IMO, is the car, computers and software.
 
Last edited:
  • Like
Likes berkeman
  • #250
The car detected the object and didn't slow down. What exactly did the car expect the object to do? Magically disappear? Turn around and leave the street? While the latter is a likely outcome, it is not guaranteed, and every responsible driver would prepare to slow down as soon as they saw the object, and would slow down long before emergency braking became necessary.
NTL2009 said:
If those are Waymo only, there is a huge difference in the approach Waymo is taking and the approach Tesla is taking.

For example, Tesla has no Lidar, something that Waymo considers a necessity.
Tesla's approach is different in several aspects.
- No Lidar
- Build the hardware into a big number of cars
- Start with a driving assistant, but make it available to the public. Collect data from all the drivers using it.

Their software doesn't do as much as Waymo's at the moment, although we don't know how much it could do. But they have tons of traffic data, even for extremely rare traffic conditions - exactly the unknown unknowns, many of which Uber and Waymo have never encountered.
 
