How Safe Are Self-Driving Cars After the First Fatal Accident?

  • Thread starter: Dr. Courtney
  • Tags: Car, Self
AI Thread Summary
A self-driving Uber vehicle struck and killed a pedestrian in Arizona, marking the first fatal incident involving autonomous cars. The event raises significant concerns about the safety and readiness of self-driving technology, especially given the limited number of vehicles in operation. Discussions highlight the potential for engineers to analyze the incident thoroughly, which could lead to improvements across all autonomous vehicles. There are debates about the legal implications of the accident, particularly regarding the accountability of the vehicle's operator and the technology itself. Ultimately, the incident underscores the complexities of integrating self-driving cars into public spaces and the necessity for rigorous safety standards.
  • #151
BillTre said:
Big deal.
A friend of mine said I drive like Mario Andretti!
Was his name "Karl"?
If so, I know him.

my Karl: Bus rider. Not used to being in a car.
 
  • #152
OmCheeto said:
Was his name "Karl"?
If so, I know him.

my Karl: Bus rider. Not used to being in a car.
Mani, guy from India.
Enjoyed my driving.
 
  • Like
Likes OmCheeto
  • #153
I heard on the news report this morning driving to work that the Uber car was not in self-driving mode at the time of the crash, but I haven't been able to find any reference to that now that I'm at work on my PC. Has anybody else seen anything about this? It was CBS news radio...
 
  • #154
berkeman said:
I heard on the news report this morning driving to work that the Uber car was not in self-driving mode at the time of the crash

If that came from Uber, that's the same company who blamed (and fired) the safety drivers for running red lights in San Francisco, when in fact it was their software.
 
  • #155
berkeman said:
I heard on the news report this morning driving to work that the Uber car was not in self-driving mode at the time of the crash, but I haven't been able to find any reference to that now that I'm at work on my PC. Has anybody else seen anything about this? It was CBS news radio...

I've seen nothing about it, but is anyone really surprised that the 'safety' driver was not 100% on the visual task? I'm pretty shocked by the amount of abuse this person has been getting online and in the media about their past history and employment by Uber. We are good drivers when we're vigilant. But we're terrible at being vigilant.
http://journals.sagepub.com/doi/pdf/10.1080/17470214808416738
The General Problem. The deterioration in human performance resulting from adverse working conditions has naturally been one of the most widely studied of all psychological problems. Amongst other possibilities, the stress arising from an unusual environment may be due either to physico-chemical abnormalities in the surroundings or to an undue prolongation of the task itself. This paper is concerned with the latter form of stress, as it has been found to occur in one particular type of visual situation; a later publication will more fully discuss the implications of these and other visual and auditory experiments (Mackworth, 1948)
 
  • #156
If the problem is the result of a LiDAR malfunction, remember why this might have happened to Uber.

https://medium.com/waymo/a-note-on-our-lawsuit-against-otto-and-uber-86f4f98902a1
One of the most powerful parts of our self-driving technology is our custom-built LiDAR — or “Light Detection and Ranging.” LiDAR works by bouncing millions of laser beams off surrounding objects and measuring how long it takes for the light to reflect, painting a 3D picture of the world. LiDAR is critical to detecting and measuring the shape, speed and movement of objects like cyclists, vehicles and pedestrians.

Hundreds of Waymo engineers have spent thousands of hours, and our company has invested millions of dollars to design a highly specialized and unique LiDAR system. Waymo engineers have driven down the cost of LiDAR dramatically even as we’ve improved the quality and reliability of its performance. The configuration and specifications of our LiDAR sensors are unique to Waymo. Misappropriating this technology is akin to stealing a secret recipe from a beverage company.

https://www.theregister.co.uk/2018/02/09/waymo_uber_settlement/
We have reached an agreement with Uber that we believe will protect Waymo’s intellectual property now and into the future. We are committed to working with Uber to make sure that each company develops its own technology. This includes an agreement to ensure that any Waymo confidential information is not being incorporated in Uber Advanced Technologies Group hardware and software. We have always believed competition should be fueled by innovation in the labs and on the roads and we look forward to bringing fully self-driving cars to the world.

https://www.reuters.com/article/us-...er-self-driving-incident-safely-idUSKBN1H1006
LAS VEGAS (Reuters) - The head of Alphabet Inc’s autonomous driving unit, Waymo, said on Saturday that the company’s technology would have safely handled the situation confronting an Uber self-driving vehicle last week when it struck a pedestrian, killing her.
 
Last edited:
  • Like
Likes Spinnor
  • #157
HAYAO said:
My condolences to the family who lost one of their important members.

Or simply an unavoidable accident (for example the pedestrian tripped over something and fell on the road)?

She didn't trip. I saw the dash-cam footage. She came out of the shadows, and the car didn't even try to brake, so it clearly failed to identify her. The driver was looking down, not watching the road, so he missed her too. It all happened very quickly. From the video, it seemed that even if the driver had seen her and braked immediately, he could not have stopped the car before it hit her; it was traveling too fast to stop in such a short distance. BUT the impact speed would have been lower, which might have left her severely injured but alive. As it was, the driver was not attentive and the car did not see her, so it appears no braking happened before the impact.
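A rough back-of-the-envelope check of that point. All numbers below are assumptions for illustration only, not figures from the investigation:

```python
# Rough illustration of why braking earlier lowers impact speed, even when a
# full stop is impossible. All numbers are assumptions for illustration only.

def impact_speed(v0_mph, gap_m, reaction_s, decel_ms2=7.0):
    """Speed (mph) at which the car reaches the pedestrian, given an initial
    speed, the distance to the pedestrian when she becomes visible, a
    reaction delay, and a constant braking deceleration."""
    v0 = v0_mph * 0.44704                 # mph -> m/s
    gap = gap_m - v0 * reaction_s         # distance left after the reaction delay
    if gap <= 0:
        return float(v0_mph)              # hits at full speed before braking starts
    v_sq = v0**2 - 2 * decel_ms2 * gap    # v^2 = v0^2 - 2*a*d
    return 0.0 if v_sq <= 0 else (v_sq ** 0.5) / 0.44704

# Assumed scenario: 43 mph, pedestrian first visible roughly 25 m ahead.
print(impact_speed(43, 25, reaction_s=1.5))  # inattentive human: near full speed
print(impact_speed(43, 25, reaction_s=0.2))  # immediate braking: much slower impact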
 
  • #158
https://www.theverge.com/2018/3/27/17168606/nvidia-suspends-self-driving-test-uber
Uber has been using Nvidia’s self-driving technology in its autonomous test cars for a while, though the companies only just started to talk about it earlier this year. Uber has said it would use Nvidia’s tech in its eventual self-driving fleets of Volvos as well as the company’s autonomous trucks. But Uber has also halted its AV testing in all the cities in which it operates, and the governor of Arizona suspended the company from testing its self-driving cars in the state “indefinitely.”

https://www.engadget.com/2018/03/27/nvidia-self-driving-virtual-simulation/
 
  • #161
Grands said:
That's kind of off-topic for this thread. This thread is about the failure of the self-driving car to see the pedestrian that it hit at night. The links you posted are for a completely different accident (that happened about 10 miles from me) where the driver hit a center divider. You can start a separate thread about that accident if you like -- it's not clear that the car was in self-driving mode, IIRC.
 
  • Like
Likes Grands
  • #162
One question I think is what do we mean when we ask how safe self-driving cars are.

I think deaths per mile isn't a very meaningful measure. For example, suppose in the normal use case of day to day driving, a SDC will have an accident at a rate that is 50% less than a human, and let's define accident as any incident in which people or property are damaged. This certainly seems "safe".

However, suppose that in the case of rare events, such as when a pedestrian's trajectory intersects that of the car, the SDC has an accident rate that approaches 90%, meaning that in 90% of such rare events the car will have an accident. And let's assume that for humans this rate is low, comparable to the overall rate of accidents.

If these events are rare enough, then the total accidents per unit mile of an SDC may well be less than that of human drivers. However, if SDCs have known flaws where they will likely fail, such as consistently plowing into pedestrians, then in those particular instances they should be considered extremely unsafe.

Does a SDC need to outperform humans in all possible cases to be safe? If, on occasion, a SDC will plow down pedestrians (including children), but in the most common scenarios performs better than humans, resulting in overall fewer deaths per mile, is it safe or not?
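To make that concrete, here is a toy calculation with entirely made-up rates, showing how an SDC can beat humans on accidents per mile while failing almost every time the rare scenario occurs:

```python
# Toy model: overall accidents per mile can look good even if the car fails
# almost every time a rare scenario occurs. All numbers are hypothetical.

human_rate_per_mile  = 2e-6           # assumed human accident rate (any damage)
sdc_common_rate      = 1e-6           # SDC 50% better in ordinary driving
rare_events_per_mile = 1e-7           # e.g. a pedestrian crossing the car's path
sdc_rare_fail_prob   = 0.90           # SDC fails in 90% of those rare events
human_rare_fail_prob = 0.01           # humans assumed to rarely fail there

sdc_total   = sdc_common_rate + rare_events_per_mile * sdc_rare_fail_prob
human_total = human_rate_per_mile + rare_events_per_mile * human_rare_fail_prob

print(f"SDC:   {sdc_total:.2e} accidents/mile")
print(f"Human: {human_total:.2e} accidents/mile")
# The SDC still wins on accidents per mile (1.09e-06 vs 2.00e-06), yet it is
# extremely unsafe in the specific rare scenario it consistently gets wrong.
```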
 
  • #163
OmCheeto said:
My younger brother once claimed that I drove like a paranoid schizophrenic
I am surprised that a paranoid schizophrenic would have any chance of passing a driving test.
 
  • #164
rootone said:
I am surprised that a paranoid schizophrenic would have any chance of passing a driving test.

Why? I don't think "inability to parallel park" is listed as a symptom in DSM-5.
 
  • #165
While, as I noted a few pages ago, there are 15 pedestrian deaths every day, I was also thinking today that many cars now have a "braking override" function as part of their safety package, where the vehicle applies the brakes on its own, without driver interaction. This exact technology IS applicable to self-driving, and we are not discussing how many lives have (possibly) been saved by it... I have seen no one present the data on lives saved this way.

Furthermore, we have no way of comparing this case to what would have happened with a human driver...

Taking one (tragic) failure and speculating on the issues based on the video, which we get to stop and analyze Monday-morning-quarterback style while none of us has direct access to all of the details, is counterproductive; we are letting personal opinion and emotion overrule good critical analysis. It is the same populist mindset that produces flat-Earth and anti-vaxxer movements.

My opinion is based largely on industrial robotics, where a MAJOR benefit is safety. Honestly, humans are AWFUL when it comes to process consistency and reliability: you cannot re-train them effectively, they get fatigued, they get distracted, they just do not care... and just because YOU think you are better than the average driver, IMO I really do not believe it...

And again, I am NOT for preventing you from driving, mostly because if you are enjoying it you are engaged, but 99% of driving is just a utilitarian pursuit: get me from point A to point B...
 
  • #166
Vanadium 50 said:
Why? I don't think "inability to parallel park" is listed as a symptom in DSM-5.
Yes but attempts to avoid collisions with imaginary objects would be.
 
  • #167
dipole said:
Does a SDC need to outperform humans in all possible cases to be safe?
Requiring this would kill thousands or tens of thousands of people.
Imagine the opposite direction: if driverless cars were the default, would we switch to human drivers because they demonstrated they are better at recognizing polar bears on wet roads, but perform worse in all other cases?
 
  • Like
Likes nsaspook
  • #168
mfb said:
Requiring this would kill thousands or tens of thousands of people.
Imagine the opposite direction: if driverless cars were the default, would we switch to human drivers because they demonstrated they are better at recognizing polar bears on wet roads, but perform worse in all other cases?

Yes, safe is not perfection, but what counts as safe in these cases?
A 'safe' human driver can hit and kill a pedestrian under exactly these conditions and not be charged with a crime or even given a traffic ticket.

https://www.usatoday.com/story/opin...-needs-set-safety-standards-column/460114002/
The risk of premature deployment is well-understood: Public welfare is harmed by the deployment of unsafe technology. However, delayed deployment would deprive society of the improved safety benefits this technology promises. Without guidance, companies will inevitably err both in being too aggressive and deploying prematurely, or in being too cautious and continuing to privately develop the technology even once it is safe enough to benefit the public. Neither scenario is in the public interest.

For the driver safety aspect.
https://www.tesla.com/en_EU/blog/update-last-week’s-accident?redirect=no
In the moments before the collision, which occurred at 9:27 a.m. on Friday, March 23rd, Autopilot was engaged with the adaptive cruise control follow-distance set to minimum. The driver had received several visual and one audible hands-on warning earlier in the drive and the driver’s hands were not detected on the wheel for six seconds prior to the collision. The driver had about five seconds and 150 meters of unobstructed view of the concrete divider with the crushed crash attenuator, but the vehicle logs show that no action was taken.
...
Tesla Autopilot does not prevent all accidents – such a standard would be impossible – but it makes them much less likely to occur. It unequivocally makes the world safer for the vehicle occupants, pedestrians and cyclists.
 
  • #169
berkeman said:
I heard on the news report this morning driving to work that the Uber car was not in self-driving mode at the time of the crash, but I haven't been able to find any reference to that now that I'm at work on my PC. Has anybody else seen anything about this? It was CBS news radio...

Maybe this is what you heard. It's logical that the Volvo XC90's embedded driver-assistance system would be disabled so that the Uber system has full control during testing.
https://www.bloomberg.com/news/arti...-suv-s-standard-safety-system-before-fatality
“We don’t want people to be confused or think it was a failure of the technology that we supply for Volvo, because that’s not the case,” Aptiv PLC spokesman Zach Peterson told Bloomberg. “The Volvo XC90’s standard advanced driver-assistance system ‘has nothing to do’ with the Uber test vehicle’s autonomous driving system.”
...
Mobileye, which produces the sensor chips in the safety systems supplied to Aptiv PLC, told Bloomberg it tested the software Monday following the crash by watching the footage of the accident. The company said the software “was able to detect Herzberg one second before impact in its internal tests” despite the video’s poor visual quality.
 
  • #170
How do people feel about autonomous cars being deliberately allowed to hit things? I'm not talking about people, but things like overhanging tree branches. In parts of the UK there are many single-track roads with passing places. It's quite common for cars to brush overhanging branches on such roads. Is this something that's going to be easy for AI to accommodate? What happens when two autonomous cars meet on such a road? Is anyone testing this, or is it all being done in American cities?
 
  • #171
CWatters said:
How do people feel about autonomous cars being deliberately allowed to hit things? I'm not talking about people, but things like overhanging tree branches. In parts of the UK there are many single-track roads with passing places. It's quite common for cars to brush overhanging branches on such roads. Is this something that's going to be easy for AI to accommodate? What happens when two autonomous cars meet on such a road? Is anyone testing this, or is it all being done in American cities?

That will be tricky for the classifier part of object recognition to integrate into the driving plan, like 'seeing' a flying bird or a floating trash bag in front of the car. I'd be more worried about how quickly human drivers and pedestrians will learn to bully self-driving cars in traffic to make them give way in just about any situation.
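As a purely hypothetical sketch of the kind of gating decision involved (not any vendor's actual logic), the planner has to decide whether a classified object is worth braking for, which is easy for a pedestrian and hard for a drifting trash bag or an overhanging branch:

```python
# Hypothetical sketch of an object-gating step in a planner: decide whether a
# classified object should trigger braking. Not any real system's logic.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g. "pedestrian", "trash_bag", "branch", "unknown"
    confidence: float   # classifier confidence, 0..1
    in_path: bool       # does its predicted trajectory cross the car's path?

# Objects the car is allowed to brush or drive through (assumed policy).
IGNORABLE = {"trash_bag", "branch", "bird"}

def should_brake(det: Detection) -> bool:
    if not det.in_path:
        return False
    if det.label in IGNORABLE and det.confidence > 0.9:
        return False                      # confidently harmless: keep driving
    return True                           # pedestrians, vehicles, or anything uncertain

print(should_brake(Detection("pedestrian", 0.95, True)))   # True
print(should_brake(Detection("trash_bag", 0.97, True)))    # False
print(should_brake(Detection("unknown", 0.40, True)))      # True: err on the safe side
```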

https://spectrum.ieee.org/transport...e-big-problem-with-selfdriving-cars-is-people
It’s not hard to see how this could lead to real contempt for cars with level-4 and level-5 autonomy. It will come from pedestrians and human drivers in urban areas. And people will not be shy about expressing that contempt. In private conversations with me, at least one manufacturer is afraid that human drivers will bully self-driving cars operating with level-2 autonomy, so the engineers are taking care that their level-3 test cars look the same as conventional models.

Bullying can go both ways, of course. The flip side of socially clueless autonomous cars is the owners of such cars taking the opportunity to be antisocial themselves.
(attached image)


One person walking like this could slow traffic to walking speed, because would you risk a possible collision with a human if you designed the automation to always be safe and never take the risks people take every day while driving in traffic?
 

  • Like
Likes CWatters
  • #172
+1

In such situations it might be something simple, such as making eye contact with the person or a small gesture, that makes the difference between you having to wait or being able to pass.
 
  • #173
CWatters said:
+1

In such situations it might be something simple, such as making eye contact with the person or a small gesture, that makes the difference between you having to wait or being able to pass.

I have my own solution for pedestrian bullies.
 
  • #174
I can think of some easy and devious ways to fool driving AI systems. Tricking self-driving cars could become a national pastime, like wearing a cycling jersey with a full-sized STOP sign on the back.

https://arxiv.org/pdf/1802.08195.pdf
Machine learning models are vulnerable to adversarial examples: small changes to images can cause computer vision models to make mistakes such as identifying a school bus as an ostrich. However, it is still an open question whether humans are prone to similar mistakes. Here, we create the first adversarial examples designed to fool humans, by leveraging recent techniques that transfer adversarial examples from computer vision models with known parameters and architecture to other models with unknown parameters and architecture, and by modifying models to more closely match the initial processing of the human visual system. We find that adversarial examples that strongly transfer across computer vision models influence the classifications made by time-limited human observers.

Now that researchers have found trivial ways to hack deep-learned vision systems, they are turning their attention to humans.
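For readers unfamiliar with the technique, here is a minimal fast-gradient-sign (FGSM) sketch of the "small change fools the model" idea from the paper. It assumes a PyTorch image classifier and a correctly labeled input, and is only an illustration of the general method, not the paper's exact procedure:

```python
# Minimal FGSM sketch: nudge each pixel slightly in the direction that
# increases the classifier's loss, producing an adversarial image.
# Assumes `model` is a PyTorch classifier and `label` a tensor of class indices.
import torch
import torch.nn.functional as F

def fgsm_example(model, image, label, eps=0.01):
    """Return a slightly perturbed copy of `image` (shape (1, C, H, W), values
    in [0, 1]) that pushes the model away from the correct `label`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel a tiny amount in the direction that increases the loss.
    perturbed = image + eps * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```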
 
  • #175
You can mess with human drivers in many ways as well. Many of these ways are illegal, and some ways specific to autonomous cars might become illegal, but overall this is not a big issue today, and I don't see how it would become one in the future.
 
  • #176
mfb said:
You can mess with human drivers in many ways as well. Many of these ways are illegal, some ways specific to autonomous cars might become illegal, but overall this is not a big issue today, I don’t see how it would become one in the future.

There is an inherent danger in messing with human drivers because they can also act irrationally.
I hope so, but in a culture where automation has eliminated driving and most low-skilled jobs, young people will find something to do to kill time instead of the 'Tide Pod' challenge.
 
  • #177
Passengers in driverless vehicles can act irrationally as well. But I don't think that is what stops people from harming traffic deliberately.
 
  • #178
berkeman said:
That's kind of off-topic for this thread. This thread is about the failure of the self-driving car to see the pedestrian that it hit at night. The links you posted are for a completely different accident (that happened about 10 miles from me) where the driver hit a center divider. You can start a separate thread about that accident if you like -- it's not clear that the car was in self-driving mode, IIRC.
Tesla confirmed that Autopilot was active (https://www.theguardian.com/technology/2018/mar/31/tesla-car-crash-autopilot-mountain-view); that is the reason I posted it here.
 
  • #179
mfb said:
You can mess with human drivers in many ways as well. Many of these ways are illegal, some ways specific to autonomous cars might become illegal, but overall this is not a big issue today, I don’t see how it would become one in the future.
Perhaps because they are computers people will want to hack them?
 
  • #180
And there are people who want to hack websites. That doesn't stop them from existing, and taking over more and more functions - buying and booking things online, online banking, ...
The danger of typing Robert'); DROP TABLE students;-- didn't exist when everything was done on paper, but that still didn't stop the conversion to computer databases. People found exploits like SQL injection, and other people wrote patches to fix them.

If STOP sign cycling jerseys become a trend and the cars struggle with them, someone will write a patch to better distinguish between cycling jerseys and actual stop signs.
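For anyone who hasn't seen it, this is the classic injection being alluded to, together with the standard parameterized-query fix, shown as a minimal sqlite3 sketch:

```python
# The classic injection alluded to above, and the standard fix, using sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT)")
student = "Robert'); DROP TABLE students;--"

# Vulnerable: pasting user input into SQL lets the input rewrite the query.
# conn.executescript(f"INSERT INTO students (name) VALUES ('{student}')")

# Patched: parameterized queries treat the input as data, never as SQL.
conn.execute("INSERT INTO students (name) VALUES (?)", (student,))
print(conn.execute("SELECT name FROM students").fetchall())
# [("Robert'); DROP TABLE students;--",)]  -- stored harmlessly as a name
```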
 
  • #181
In a car with lane assist what happens at a symmetrical fork in the road? Do they see this as the road getting wider and send you down the middle?
 
  • Like
Likes russ_watters
  • #183
CWatters said:
In a car with lane assist what happens at a symmetrical fork in the road? Do they see this as the road getting wider and send you down the middle?

 
  • #184
From Fox: "... when in self-driving mode, car relies on human driver to intervene in case of emergency, but doesn't alert the driver ..."
 
  • #185
Bystander said:
From Fox: "... when in self-driving mode, car relies on human driver to intervene in case of emergency, but doesn't alert the driver ..."
Yeah, on the surface of it, that would seem to be a design flaw...
 
  • Like
Likes russ_watters
  • #186
If it could alert the driver it would know something is wrong and could react accordingly. The human is necessary for emergencies the car cannot detect (yet).
 
  • Like
Likes russ_watters
  • #187
NTSB: Uber’s sensors worked; its software utterly failed in fatal crash
https://arstechnica.com/cars/2018/0...led-by-ubers-self-driving-software-ntsb-says/

"
The National Transportation Safety Board has released its preliminary report on the fatal March crash of an Uber self-driving car in Tempe, Arizona. It paints a damning picture of Uber's self-driving technology.

The report confirms that the sensors on the vehicle worked as expected, spotting pedestrian Elaine Herzberg about six seconds prior to impact, which should have given it enough time to stop given the car's 43mph speed.

The problem was that Uber's software became confused, according to the NTSB. "As the vehicle and pedestrian paths converged, the self-driving system software classified the pedestrian as an unknown object, as a vehicle, and then as a bicycle with varying expectations of future travel path," the report says.

Things got worse from there. ..."
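One way to see how flip-flopping classifications could delay braking (purely illustrative; the preliminary report does not describe Uber's internal logic): if every re-classification resets the object's track, the system never holds a predicted path long enough to act on it.

```python
# Purely illustrative: why re-classifying an object can delay a braking
# decision if each new class resets the predicted path. Not Uber's actual code.

def frames_until_brake(class_sequence, frames_needed=10):
    """Count frames until the system has tracked the *same* class long enough
    to trust its predicted path and commit to braking."""
    streak, last = 0, None
    for frame, label in enumerate(class_sequence, start=1):
        streak = streak + 1 if label == last else 1   # reset on every re-classification
        last = label
        if streak >= frames_needed:
            return frame
    return None  # never became confident before the sequence ended

# Stable classification: confident after 10 frames.
print(frames_until_brake(["bicycle"] * 30))                                      # 10
# Flip-flopping (unknown -> vehicle -> bicycle ...): confidence keeps resetting.
print(frames_until_brake((["unknown"]*4 + ["vehicle"]*4 + ["bicycle"]*4) * 3))   # None
```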
 
  • Like
Likes nsaspook
  • #188
Why is the emergency braking capability that is enabled for human drivers disabled under computer control because it might cause erratic driving behavior? How so? One would think that such a capability would be a sort of fail-safe feature for a computer system that might have an obstacle-identification issue. Collision imminent, stop, no other action required.
 
  • Like
Likes russ_watters
  • #189
gleem said:
enabled for human drivers is disabled for computer control because it might cause erratic driving behavior.
Time for criminal negligence proceedings?
 
  • Like
Likes russ_watters
  • #190
Nsaspook... I think that video answers my question.

Looks like it just followed the brightest white line.
 
  • #191
gleem said:
Why is the emergency braking capability that is enabled for human drivers disabled under computer control because it might cause erratic driving behavior? How so?
I wondered about that too. I was going to post a wise___ comment about erratic braking interrupting me when I was doing my fingernail polish in self-driving mode, but then I found a link that said it was meant to avoid false emergency braking for things like plastic bags blowing across the road. (I didn't save the link) Still, at least enabling the audible beep alert when the car detects something unusual seems so important...
 
  • #192
Yes, but it is normally engaged with a human in control. How does a human override it if it is a bag, and what if it is not a bag but the system thinks it is? Considering that it detected the woman six seconds before impact, an audible warning seems obvious: "Heads up, something you should take a look at."
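A hypothetical sketch of the fallback policy being described here, with made-up thresholds, just to show that "alert the driver when unsure" is a cheap middle ground between full emergency braking and doing nothing:

```python
# Hypothetical sketch of the fallback gleem describes: brake hard only when
# the system is confident, but always alert the safety driver when it is not.
# Thresholds are invented for illustration.

def respond(object_in_path: bool, confidence: float, seconds_to_impact: float) -> str:
    if not object_in_path:
        return "continue"
    if confidence > 0.9 and seconds_to_impact < 3.0:
        return "emergency_brake"          # sure it's a real obstacle and it's close
    if seconds_to_impact <= 6.0:
        return "audible_alert"            # "heads up, take a look" costs nothing
    return "monitor"

print(respond(True, 0.40, 6.0))   # audible_alert: unsure what it is, six seconds out
print(respond(True, 0.95, 1.5))   # emergency_brake
```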
 
  • #193
gleem said:
Considering that it detected the woman six seconds before impact, an audible warning seems obvious: "Heads up, something you should take a look at."
Spinnor said:
The problem was that Uber's software became confused, according to the NTSB. "As the vehicle and pedestrian paths converged, the self-driving system software classified the pedestrian as an unknown object, as a vehicle, and then as a bicycle with varying expectations of future travel path," the report says.

Things got worse from there. ..."
It appears that HAL was conflicted... Open the Pod Bay doors, HAL. HAL?!
 
  • #194
berkeman said:
It appears that HAL was conflicted...
Not true ...

 
Last edited:
  • Like
Likes berkeman and Borg
  • #195
gleem said:
Considering that it detected the woman six seconds before impact, an audible warning seems obvious: "Heads up, something you should take a look at."

Even if you aren't busy on your phone it can take a long time to react to a warning and regain control...

https://jalopnik.com/what-you-can-do-in-the-26-seconds-it-takes-to-regain-co-1791823621

The study, conducted by the University of Southampton, reviewed the driving patterns of 26 men and women in situations, with and without a “distracting non-driving secondary task.”

Here’s the upshot: the study found that drivers in “non-critical conditions”—that is, say, when you’re not texting, fiddling with the radio, eating, or, as the study puts it, “normal conditions”—needed 1.9 to 25.7 seconds to assume control of the autonomous vehicle when prompted. Nearly 26 ******* seconds, and again, that is without the distractions. When participants were distracted, the response time to assume control ranged from 3.17 to 20.99 seconds.

https://www.digitaltrends.com/cars/automated-car-takeover-time/

It’s important to note that the test times didn’t measure driver response to whatever was going on, but just how long it took them to regain control. Then they had to assess the situation and take appropriate action. If you don’t find that a little scary, maybe think again.
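To put those takeover times in perspective, here is the distance covered at the Uber vehicle's reported 43 mph while the driver is still regaining control (simple arithmetic using the speed and the takeover times quoted above):

```python
# Distance covered while a driver regains control, at an assumed 43 mph
# (the Uber vehicle's reported speed). Takeover times from the study quoted above.
MPH_TO_MS = 0.44704
speed_ms = 43 * MPH_TO_MS

for takeover_s in (1.9, 3.17, 20.99, 25.7):
    print(f"{takeover_s:5.2f} s to take over -> {speed_ms * takeover_s:6.1f} m travelled")
# 25.7 s at 43 mph is roughly half a kilometre before the human is even back in control.
```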
 
Last edited:
  • Like
Likes nsaspook
  • #196
Spinnor said:
NTSB: Uber’s sensors worked; its software utterly failed in fatal crash
https://arstechnica.com/cars/2018/0...led-by-ubers-self-driving-software-ntsb-says/

The problem was that Uber's software became confused, according to the NTSB. "As the vehicle and pedestrian paths converged, the self-driving system software classified the pedestrian as an unknown object, as a vehicle, and then as a bicycle with varying expectations of future travel path," the report says.
What a terrible analysis! There is nothing in the (brief/preliminary) NTSB report that suggests the computer was confused or otherwise failed. A succession of increasingly accurate classifications is exactly what you would expect from a properly operating system. What isn't in the report is how long the sequence took or what the predicted paths and courses of action to avoid the collision were. What also isn't clear is whether *all* avoidance actions were disabled or only "emergency" actions (e.g., are all braking maneuvers to avoid a collision "emergency" actions?).

What I see is a system that - from what we have been told - was working properly but was purposely handicapped so the engineers could focus on other aspects of the system. As if we humans are AI in a video game that can be turned off so the developers can focus on testing the scenery without getting shot at.
Bystander said:
Time for criminal negligence proceedings?
Yep. Time to subpoena the company org chart and start circling names of people to arrest.

Unfortunately, the driver will need to be first up because according to the wording in the report, she was responsible for avoiding the collision. She could simply be charged under existing distracted driving rules. I say "unfortunately" because it was her manager who assigned her a job description that was illegal at face value. But "just following orders" doesn't alleviate her responsibility. Still, everyone involved in the decisions to not have a dedicated driver (too expensive to staff the car with two people?) and to disable the emergency braking features of the software should be charged.

These companies need to be made to stop using the real world as an R&D environment. Starting to charge engineers and managers with homicide will do that. This incident appears to me to be covered under existing distracted driver laws, but the federal government should step in and set more explicit rules for self-driving cars and their testing.
 
  • Like
Likes NTL2009
  • #197
CWatters said:
Even if you aren't busy on your phone it can take a long time to react to a warning and regain control...
This, to me, is an "uncanny valley" that needs to be addressed better, again by law if necessary. It seems clear to me that a responsibility-handoff situation is fundamentally incapable of working and should be avoided or not allowed. Drivers just aren't capable of switching their attention on and off the way these systems need, and even if they were, they could only do so with an increased reaction time. It also seems to me that the scenario only exists because self-driving features aren't good enough yet.
 
  • Like
Likes nsaspook
  • #198
+1 Pilots have been concerned for some time about the loss of situational awareness that automation can cause. Fortunately they usually have longer to react.
 
  • #199
russ_watters said:
but the federal government should step in and set more explicit rules for self-driving cars and their testing.

I completely agree with this, by the way. Either at the federal level or in the individual states that permit these vehicles to run, clear guidelines are badly needed.
- - - - -

russ_watters said:
Yep. Time to subpoena the company org chart and start circling names of people to arrest.

Unfortunately, the driver will need to be first up because according to the wording in the report, she was responsible for avoiding the collision. She could simply be charged under existing distracted driving rules. I say "unfortunately" because it was her manager who assigned her a job description that was illegal at face value. But "just following orders" doesn't alleviate her responsibility. Still, everyone involved in the decisions to not have a dedicated driver (too expensive to staff the car with two people?) and to disable the emergency braking features of the software should be charged.

These companies need to be made to stop using the real world as an R&D environment. Starting to charge engineers and managers with homicide will do that.

To be honest, this is the kind of playbook I'd expect from a politically ambitious prosecutor (Bharara, Spitzer, Giuliani, probably Christie). When there is a lack of clear guidelines, aggressive prosecutors looking to promote themselves frequently fill the void. I'm kind of surprised it hasn't happened yet.

It also strikes me as very bad policy. There is a massive body count every year due to 'regular' car accidents in the US. (This towers over murders, btw.) It's actually embarrassing how bad car fatalities are every year in the US versus, say, Ireland or another Western European country. Some reading here: https://www.economist.com/united-states/2015/07/04/road-kill note: "In 2014 some 32,675 people were killed in traffic accidents. In 2013, the latest year for which detailed data are available, some 2.3m were injured—or one in 100 licensed drivers."
- - - -
Self-driving cars, from what I can tell, are by far the most promising way out of this. Charging employees and/or companies with felonies in situations where I don't see malevolence, or a well-defined standard of criminal negligence, seems myopic. We're talking about a single-digit body count, possibly double digits over the next couple of years, from people (i.e., a handful of self-driving-car companies and their personnel) acting in good faith to improve self-driving cars (though I'm not happy with the way it's being done; again, rules are needed) versus tens of thousands of fatalities every year and millions of injuries, and that's just in the US.
- - - -
My take: get clear rules first, and proceed from there.
 
  • #200
russ_watters said:
Unfortunately, the driver will need to be first up because according to the wording in the report, she was responsible for avoiding the collision. She could simply be charged under existing distracted driving rules
That's just one opinion from a single report, and it could be quashed as incorrect.
People hit deer and other wild animals darting out into traffic and are not labeled 'distracted'.
The comparison is made, with apologies to the lady (the victim) and her family, to present the argument that if it had NOT been a self-driving car, would any reasonable person have been able to avoid the incident? Pedestrians walking in the middle of traffic assume a fair amount of responsibility for their own decisions and actions in putting their life in danger, and a reasonable driver may not be able to avoid a collision with such a pedestrian.

As I said previously, the "safety" driver is superfluous for the most part and is just there in case, say, the car stalls or gets a flat tire. Monitoring the gauges? Have they never heard of data recording and black boxes for cars? Checking for possible incidents? The computer is supposed to be able to see, monitor, and take appropriate action better than a human, so we have been told, and that is quite true if it all works correctly. In the 1% of cases where the human is better, that can be fixed in the AI.

Thing is, a lot of money is expected to be made from self-driving vehicles in the coming years. Right now, my point of view is that the first one off the starting block wins to some extent, and a sort of 'wild west' attitude (again, my point of view), along with the promotional pressure of 'this is the next best thing' and 'join the bandwagon or be left behind', pushes the testing to go live perhaps before it should. Some companies will be fine with more bugs than others, rightly or wrongly. And since transportation departments, cities, and towns are all still unsure how the AI car of the future will affect the commute and the sharing of the road, live testing serves that planning purpose as well.

Since there is no oversight on what a safe AI car really is, and the agencies have allowed the live testing, I really do not comprehend how they can turn around and allege criminal intent.
 
