News: First fatal accident involving a car in self-driving mode

Summary
The first fatal accident involving a self-driving Tesla occurred when a tractor-trailer made a left turn in front of the vehicle, which failed to brake. Concerns were raised about the software's ability to detect obstacles, particularly those without a ground connection, leading to questions about its reliability in various driving scenarios. Despite the incident, statistics suggest that Tesla's autopilot may still have a lower accident rate compared to human drivers. The discussion highlighted the need for better oversight and regulation of autonomous vehicles, as well as the ethical implications of allowing such technology to operate without adequate human monitoring. Overall, the incident underscores the challenges and risks associated with the early deployment of self-driving technology.
  • #31
jim's link said:
From the description of the accident, it seems like death or serious injury was inevitable even if the driver had been in full control of the car.
jim hardy said:
It's beginning to sound like a Darwin award.

mfb said:
Not if the report you linked is right.
well - you got me. I found that statement in the June 30 article at the Mars link in my post, alright. It's written by one Steve Hanley.
But you won't find the quote that I'd pasted anywhere in Hanley's June 30 article.
And it's only that reporter Steve Hanley's spin on it. His words do not appear in Tesla's statement, which he included in his article, but look how cleverly he led us to believe they were...

Back to the point
That is NOT the article from which I copied the link.
I copied the link from a July 1 article which makes no such claim. That article is written by one "Gene" and is linked below.
I suppose both Hanley and Gene are Tesla aficionados in that they write for 'Teslarati' ...

What a strange webpage Teslarati has there:
the address changes as you scroll down, according to which article is at the bottom of the screen.
Symptomatic of too much computer and too little sense.
Isn't it a basic IT premise, "Do not write self-modifying code"?

So the link had changed itself when I 'copied' it. Sorry about that; I never knew it could happen or I would have checked. I always do at YouTube.
If you go to Gene's July 1 article, which I had on top of my screen when I clicked 'copy',
you won't find that speculation:
http://www.teslarati.com/witnesses-details-deadly-tesla-accident/

Artificial intelligence and natural stupidity don't mix. My computer mistake demonstrates it as surely as did that poor driver's. But mine was inconsequential.

Again, sorry for any confusion. So, Mr. mfb - you got me good with that one! :DD

old jim

PS: I fixed the link up above.
 
Last edited:
  • #32
jim hardy said:
How far down the road does a Tesla autopilot look?
The radar likely covers the front out to 200m, two football fields, perhaps with something like a 20 degree beam.
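As a rough sense of scale for those figures (the 200 m range and 20 degree beam are the assumptions above, not published Tesla specifications), a short back-of-the-envelope calculation:
Code:
import math

# Hedged sketch: the 200 m range and 20 degree beam width are the assumptions
# from the post above, not published Tesla radar specifications.
MAX_RANGE_M = 200.0
BEAM_WIDTH_DEG = 20.0

def beam_cross_range(range_m, beam_deg):
    """Approximate width of the radar footprint at a given range."""
    return 2.0 * range_m * math.tan(math.radians(beam_deg / 2.0))

def seconds_to_close(range_m, closing_speed_mph):
    """Time until an object first seen at range_m is reached at a constant closing speed."""
    return range_m / (closing_speed_mph * 0.44704)  # mph -> m/s

for r in (50, 100, 200):
    print(f"at {r:>3} m the beam is ~{beam_cross_range(r, BEAM_WIDTH_DEG):.0f} m wide")
print(f"200 m at a 65 mph closing speed leaves ~{seconds_to_close(MAX_RANGE_M, 65):.1f} s")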

Many have contributed to the field of autonomous vehicles, but the researcher most directly responsible for moving it from the ditch-finding exercise it once was to what we see today is Sebastian Thrun and the Stanford DARPA vehicle; a description of how it all originally worked is here, with the vehicle's sensors described on page 664.
http://isl.ecst.csuchico.edu/DOCS/darpa2005/DARPA 2005 Stanley.pdf
 
  • #33
nsaspook said:
...The autopilot failed to disengage after the deadly wreck and continued driving for quite a distance.
http://www.teslarati.com/witnesses-details-deadly-tesla-accident/

The autopilot did not "fail" to disengage; it continued as designed. It continued until it crashed (again), finally into a pole, per the bystander report. The video stating the same:


Of course it did. The vehicle sensors and traction systems were still functioning with the roof sheared off. There is no 'stop in the event of tragic loss of life' software routine.
 
  • Like
Likes russ_watters
  • #34
mheslep said:
Of course it did. The vehicle sensors and traction systems were still functioning with the roof sheared off. There is no 'stop in the event of tragic loss of life' software routine.

There should be some sort of sensor-driven routine (triggered by an impact, or by structural integrity such as still having a roof) that would bring the car to an Autopilot halt (Automatic Emergency Braking) in the event of a major accident.
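A minimal sketch of the kind of interlock being proposed here; the thresholds, signal names, and latching behavior are all invented for illustration and say nothing about how Tesla's software actually works:
Code:
from dataclasses import dataclass
from typing import Optional

# Illustrative only: the thresholds and the idea of latching a "major impact"
# are assumptions for this sketch, not a description of Tesla's design.
IMPACT_G_THRESHOLD = 5.0      # assumed deceleration spike magnitude, in g
IMPACT_MIN_DURATION_S = 0.01  # assumed minimum spike duration, in seconds

@dataclass
class ImpactMonitor:
    spike_start: Optional[float] = None
    crash_latched: bool = False

    def update(self, t, longitudinal_g):
        """Latch a 'major impact' once a rearward deceleration spike persists long enough."""
        if self.crash_latched:
            return True
        if longitudinal_g <= -IMPACT_G_THRESHOLD:
            if self.spike_start is None:
                self.spike_start = t
            elif t - self.spike_start >= IMPACT_MIN_DURATION_S:
                self.crash_latched = True
        else:
            self.spike_start = None
        return self.crash_latched

def autopilot_step(monitor, t, longitudinal_g):
    """After a latched impact, request a controlled stop instead of continuing."""
    if monitor.update(t, longitudinal_g):
        return "request_controlled_stop"   # e.g. hand off to Automatic Emergency Braking
    return "continue_normal_driving"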
 
Last edited:
  • #35
Why do you suppose it left the road rather than continuing on into downtown Williston?
Physical damage to the steering wheel? Or was it trying to pull over and park? Or did it lose a camera and couldn't see?
 
  • #36
"Major accident" is again a term based in human common sense. There is no major accident sensor. Structural integrity sensor grids are impractical in any kind of complex machine in mass public use. Accelerometers are practical, which the vehicle must have. So the vehicle encountered something like, I dunno, a 5g or 10g rearward spike for a few milliseconds and post event it is still moving otherwise unimpaired as far as the sensors show; the road ahead is now perfectly clear. What should the software do? Emergency stop, regardless of road conditions and traffic? Maneuver off the road, close to the ditch?

Suppose that sensor signal was instead from unavoidably hitting a deer on a busy highway with traffic moving 75 mph? Me, I pull off the road as soon as I judge the shoulder safely allows it; I don't emergency stop on the highway.
 
Last edited:
  • Like
Likes russ_watters
  • #37
[Image: tesla-truck-accident-31.jpg]

The Florida Highway Patrol described the events:

When the truck made a left turn onto NE 140th Court in front of the car, the car’s roof struck the underside of the trailer as it passed under the trailer. The car continued to travel east on U.S. 27A until it left the roadway on the south shoulder and struck a fence. The car smashed through two fences and struck a power pole. The car rotated counter-clockwise while sliding to its final resting place about 100 feet south of the highway.

Here’s our bird’s-eye visualization of what happened based on the information released by the police:
http://electrek.co/2016/07/01/understanding-fatal-tesla-accident-autopilot-nhtsa-probe/
 
  • Like
Likes Monsterboy and mheslep
  • #38
mheslep said:
"Major accident" is again a term based in human common sense. There is no major accident sensor. Structural integrity sensor grids are impractical in any kind of complex machine in mass public use. Accelerometers are practical, which the vehicle must have. So the vehicle encountered something like, I dunno, a 5g or 10g rearward spike for a few milliseconds and post event it is still moving otherwise unimpaired as far as the sensors show; the road ahead is now perfectly clear. What should the software do? Emergency stop, regardless of road conditions and traffic? Maneuver off the road, close to the ditch?

A structural integrity sensor could be something as simple as a wire current loop in the roof supports to detect if it's missing. Yes, an emergency stop is a good option if the alternative is to continue blindly into a house, a building, or a group of people. It's just dumb luck there was an open field in this case.
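For what it's worth, the trip-wire idea boils down to a continuity check; a toy sketch (the sense reading is simulated here, since the actual I/O would be vehicle-specific):
Code:
# Toy sketch of a continuity-loop interlock: a thin wire routed through the
# roof pillars conducts while the roof is intact and opens if it is sheared off.
# The sense reading is simulated; in a real car it would come from a digital input.

def roof_loop_intact(sense_reading):
    """True while the roof loop conducts; a sheared roof opens the circuit."""
    return bool(sense_reading)

def structural_interlock(sense_reading):
    """Request a controlled stop if the roof loop is broken, otherwise carry on."""
    return "ok" if roof_loop_intact(sense_reading) else "request_controlled_stop"

assert structural_interlock(True) == "ok"
assert structural_interlock(False) == "request_controlled_stop"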
 
  • #39
nsaspook said:
A structural integrity sensor could be something as simple as a wire current loop in the roof supports to detect if it's missing.
And in every structural piece of the vehicle, with wiring running across all structural components, all returning to some hub. A break in any part of which would need to trigger, what, an emergency stop, over the life of the vehicle. It's not done because it's not remotely practical.
 
  • #40
mheslep said:
And in every structural piece of the vehicle, with wiring running across all structural components, all returning to some hub. A break in any part of which would need to trigger, what, an emergency stop, over the life of the vehicle. It's not done because it's not remotely practical.

I agree it would be impractical to cover the entire car, but the roof is a part of the car that can easily be destroyed completely (a missing roof after a car runs under a truck is not uncommon) while leaving most of the car's important functions operational, so it would be practical to sense with simple trip-wire interlocks.
 
  • #41
mheslep said:
That's profoundly myopic, akin to saying there's not much difference between the latest clever soccer playing robot and human beings, because they both can make a basic play plan, kick the ball. I say profoundly, because navigating the world and dealing with others out there on the road is not just a bit more complex than kicking a ball but vastly more complex, and so are the consequences for getting it wrong.

From the point of view of assessing risk there is no difference. What do you know about the state of AI in regards to automated cars? Nothing I'm sure, but the developers of these systems and the companies investing millions of dollars in R&D into developing this technology seem confident that it is a solvable problem, and I'm willing to believe people who put their money where their mouth is.

What does the complexity of the problem have to do with risk assessment? At the end of the day, if the statistics show that automated cars do at least as well as human drivers, then from a public policy point of view, automated cars shouldn't really be seen as much different than public or private transportation - one entity is in operation of a vehicle as a service for another. You seem to be of the opinion that if an automated car makes a mistake and kills someone, that is somehow worse than if a human makes the same mistake and kills someone. This is pure BS and just fear of something new. You're used to the idea of humans messing up and killing people, so you're OK with it. Robots making a mistake and killing people is scary and horrible - even if at the end of the day far fewer people will die in car accidents.
 
  • #42
mheslep said:
If autonomous was the same as cruise control it would be no better than cruise control. Autonomous means ... autonomous, i.e. to "act independently", and not with supervision, by definition.
Does Tesla call its autopilot "autonomous"? But I don't want to argue about semantics. You asked where the advantage of autopilot is if you still have to watch traffic. And the answer is: the same as for cruise control, just better. It is more convenient.

Every kitchen knife is a potentially deadly weapon - you can kill yourself (or others) if you use it in the wrong way. There are deadly accidents with knives all the time. Why do we allow selling and using kitchen knives? Because they can be used properly for useful things.

Tesla's autopilot is not a new knife; it is a feature for the knife that can improve safety - if you use it properly. If you use it in a completely wrong way, you still get the same accident rate as before.

nsaspook said:
An emergency stop is a good option if the alternative is to continue blindly into a house, a building, or a group of people. It's just dumb luck there was an open field in this case.
Was the car blind? I would expect that it would do an emergency stop in that case, but where do you see indications of the car being blind to the environment?
 
  • #43
mfb said:
Was the car blind? I would expect that it would do an emergency stop in that case, but where do you see indications of the car being blind to the environment?

When it veers off the lane and travels across a field, hitting two fences and a pole.
 
Last edited:
  • Like
Likes Monsterboy and russ_watters
  • #44
One would expect that, just like airbags can be deployed by a sensor that reads high deceleration, the autopilot should be designed to sense any impact of sufficient magnitude.

If the autopilot senses g-forces that exceed some limit (such as part of the car impacting a tractor trailer, or even a guard rail) it should slow to a stop just to be on the safe side.
 
  • Like
Likes Monsterboy
  • #45
dipole said:
From the point of view of assessing risk there is no difference.
Yes, there is - a substantial difference. The raw accident rate is not the only metric of significance. The places and kinds of accidents would change, the share of accidents involving children, pedestrians, and the handicapped would change, and the share of accidents involving mass casualties would change.

dipole said:
... What do you know about the state of AI in regards to automated cars? Nothing I'm sure, ...
Quite a bit; I worked on the vision system of one of the first vehicles that might be called successful. To realize that these vehicles are sophisticated *and* utterly oblivious machines, I think one should ride in one along a dirt road, see the software handle the vehicle with preternatural precision for hundreds of miles, and then all of a sudden head off the road at speed towards a deep gully to avoid a suddenly appearing dust devil.
 
  • #46
mheslep said:
Quite a bit; I worked on the vision system of one of the first vehicles that might be called successful. To realize that these vehicles are sophisticated *and* utterly oblivious machines, I think one should ride in one along a dirt road, see the software handle the vehicle with preternatural precision for hundreds of miles, and then all of a sudden head off the road at speed towards a deep gully to avoid a suddenly appearing dust devil.

I wouldn't say you worked on one that was successful. ;)
 
  • #47
DaveC426913 said:
One would expect that, just like airbags can be deployed by a sensor that reads high deceleration, the autopilot should be designed to sense any impact of sufficient magnitude.
Either the g magnitude or time of the shock was insufficient to trip the air bags in this crash, i.e. 7 g's. Would you come to a stop on a major highway, with traffic doing 75 mph, if you had hit, say, a deer that was then thrown clear of the vehicle, or would you continue until you could maneuver off the road?

Trying to come up with Yet Another Simple Sensor Rule might make the vehicle a bit safer, or not, but it is nothing like the scene understanding which people possess, the holy grail of AI. Declaring the vehicle safe based on sensor limits is a mistake.
 
  • Like
Likes Monsterboy
  • #48
mheslep said:
Either the g magnitude or time of the shock was insufficient to trip the air bags in this crash,
Huh. Did not know that.
 
  • #49
mheslep said:
Would you come to a stop on a major highway, with traffic doing 75 mph, if you had hit, say, a deer that was then thrown clear of the vehicle, or would you continue until you could maneuver off the road?

I've seen people do both in similar situations.

mheslep said:
Trying to come up with Yet Another Simple Sensor Rule might make the vehicle a bit safer, or not, but it is nothing like the scene understanding which people possess, the holy grail of AI. Declaring the vehicle safe based on sensor limits is a mistake.

Sure, but how much do people understand the scene? It is not just a question of what humans can do, but what they actually do.

In principle you could reduce this to absurdity. For example, should the car recognize that the tire on the adjacent car is low on pressure and dangerous? Also, which is more likely to have an advantage during an emergency, a planned strategy based on the physics of the car, or a panic reaction?
 
  • #51
russ_watters said:
I agree and I'm utterly shocked that autonomous cars have been allowed with very little government oversight so far. We don't know how they are programmed to respond to certain dangers or what Sophie's choices they would make. We don't know what they can and can't (a truck!) see.
How do you know when an object in the field-of-view has a signal-to-noise ratio too low to be detected?

It isn't just Tesla's autopilot that has that problem. Many accidents have happened when human drivers had their vision momentarily interrupted by reflection or by glare caused by scattering or refraction.
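To illustrate why "too low to be detected" isn't a sharp line, here is a toy thresholding example; the numbers are invented, and real automotive perception stacks are far more involved:
Code:
import random

# Toy illustration: detection is a statistical decision, so a low-contrast object
# (a white trailer against a bright sky) can sit near the threshold where it is
# sometimes seen and sometimes missed. All numbers here are invented.
random.seed(0)

def detected(signal, noise_sigma=1.0, threshold_sigmas=3.0):
    """Declare a detection when the noisy measurement exceeds the threshold."""
    measurement = signal + random.gauss(0.0, noise_sigma)
    return measurement > threshold_sigmas * noise_sigma

def detection_rate(signal, trials=10_000):
    return sum(detected(signal) for _ in range(trials)) / trials

for contrast in (0.5, 1.0, 2.0, 4.0):  # object signal in units of the noise sigma
    print(f"contrast {contrast:.1f} sigma -> detected {detection_rate(contrast):.1%} of the time")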
 
  • #52
russ_watters said:
I agree and I'm utterly shocked that autonomous cars have been allowed with very little government oversight so far. We don't know how they are programmed to respond to certain dangers or what Sophie's choices they would make. We don't know what they can and can't (a truck!) see.
The same can be said about human-driven cars, word for word.
There were 29,989 fatal motor vehicle crashes in the United States in 2014 in which 32,675 deaths occurred.
This resulted in 10.2 deaths per 100,000 people and 1.08 deaths per 100 million vehicle miles traveled.
http://www.iihs.org/iihs/topics/t/general-statistics/fatalityfacts/state-by-state-overview
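A quick back-of-the-envelope check on those figures (the population and mileage totals below are inferred from the quoted rates, not taken directly from the IIHS page):
Code:
# Back-calculating the denominators implied by the quoted 2014 figures.
deaths = 32_675
rate_per_100k_people = 10.2
rate_per_100m_vmt = 1.08

implied_population = deaths / rate_per_100k_people * 100_000   # ~320 million people
implied_miles = deaths / rate_per_100m_vmt * 100_000_000       # ~3.0 trillion miles

print(f"implied US population: {implied_population / 1e6:.0f} million")
print(f"implied vehicle miles traveled: {implied_miles / 1e12:.2f} trillion")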
 
  • Like
Likes mfb
  • #53
mfb said:
Why?
How exactly do you expect having test data to make the software or simulations worse?
[emphasis added]
Huh? Was that a typo? Your previous question was "For any given point in time, do you expect anything to be better if we delay implementation of the technology?" How did "better" become "worse"?

I'm advocating more data/testing before release, not less. Better data and better simulations, resulting in safer cars. I can't imagine how my meaning could not be clear, but let me describe in some detail how I think the process should go. Some of this has probably happened, most not. First a little background on the problem. The car's control system must contain at least the following elements:
A. Sensor hardware to detect the car's surroundings.
B. Control output hardware to actually steer the car, depress the brakes, etc.
C. A computer for processing the needed logic:
D. Sensor interpretation logic to translate the sensor inputs into a real picture of the car's surroundings.
E. Control output logic to make the car follow the path it is placed on.
F. Decision-making logic to determine what to do when something doesn't go right.
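To make the division of labor among those elements concrete, here is a skeletal, entirely hypothetical sketch of how A-F might fit together; none of the names describe Tesla's actual software:
Code:
from dataclasses import dataclass, field

# Hypothetical skeleton mapping elements A-F above onto a sense -> perceive ->
# plan -> act loop. Class and method names are invented for illustration only.

@dataclass
class WorldModel:              # output of D: interpreted picture of the surroundings
    obstacles: list = field(default_factory=list)
    lane_center_offset_m: float = 0.0

class Sensors:                 # A: raw sensor hardware interface
    def read(self) -> dict: ...

class Perception:              # D: sensor interpretation logic
    def interpret(self, raw: dict) -> WorldModel: ...

class Planner:                 # F: decision-making when something doesn't go right
    def plan(self, world: WorldModel) -> dict: ...

class Actuators:               # B + E: control outputs and the logic tracking the plan
    def apply(self, command: dict) -> None: ...

def control_loop(sensors, perception, planner, actuators):
    """C: the computer ties the pieces together, called at a fixed rate."""
    raw = sensors.read()
    world = perception.interpret(raw)
    command = planner.plan(world)
    actuators.apply(command)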

Here's the development-to-release timeline that I think should be taking place:
1. Tesla builds and starts selling a car, the Model S, with all the hardware they think is needed for autonomous control. This actually happened in October of 2014.

2. Tesla gathers data from real-world driving of the car. The human is driving, the sensors just passively collecting data.

3. Tesla uses the data collected to write software for the various control components and create simulations to test the control logic.

4. Tesla installs the software to function passively in the cars. By this I mean the car's computer does everything but send the output to the steering/throttle/brake. The car records the data and compares the person's driving to the computer's simulation of the driving. This would flag major differences between behaviors so the software could be refined and point to different scenarios that might need to be worked-out in simulation.
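Step 4 is essentially "shadow mode" testing; a minimal sketch of the flagging idea, with invented field names and disagreement thresholds:
Code:
# Minimal "shadow mode" sketch for step 4: the software computes what it *would*
# have done, and large disagreements with the human driver are logged for review.
# The field names and thresholds below are invented for illustration.
STEER_DISAGREEMENT_DEG = 10.0
BRAKE_DISAGREEMENT = 0.3       # fraction of full braking

def flag_divergence(human, shadow):
    flags = []
    if abs(human["steer_deg"] - shadow["steer_deg"]) > STEER_DISAGREEMENT_DEG:
        flags.append("steering divergence")
    if abs(human["brake"] - shadow["brake"]) > BRAKE_DISAGREEMENT:
        flags.append("braking divergence")
    return flags

# Example: the human brakes hard for a crossing truck the shadow system ignored.
print(flag_divergence({"steer_deg": 0.0, "brake": 0.8},
                      {"steer_deg": 1.0, "brake": 0.0}))   # ['braking divergence']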

5. Tesla deploys a beta test of the system using a fleet of trained and paid test "pilots" of the cars, similar to how Google had an employee behind the wheel of their Street View autonomous cars. These drivers would have training on the functional specs of the car and its behaviors -- but most of all, how to behave if they think the car may be malfunctioning (don't play "chicken" with it).

6. Tesla makes the necessary hardware and software revisions to finalize the car/software.

7. Tesla provides a report of the beta program's results, the functional specifications of the autopilot, and a few test cars to the Insurance Institute and NHTSA for their approval.

8. The autopilot is enabled (this actually happened in October of 2015).

Each of these steps, IMO, should take 1-2 years (though some would overlap), and the total time from first test deployment to public release should be about a decade. Tesla actually enabled the feature about 1 year after the sensors started being installed in the cars, so it couldn't possibly have done much of anything with most of the steps, and we know for sure they made zero hardware revisions to the first cars with the capability (which doesn't mean they haven't since improved later cars' sensor suites). Since the software is upgraded "over the air", the cars have some communication ability with Tesla, but how much I don't know. Suffice it to say, though, that the amount of data these cars generate and process would have to be in the gigabytes or terabytes per hour range. A truly massive data processing effort.
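For a rough sense of scale on that data claim (the camera resolution, color depth, and frame rate are assumed round numbers, not Tesla sensor specifications):
Code:
# Rough scale check on "gigabytes or terabytes per hour" for one camera alone.
# The resolution, color depth, and frame rate are assumptions, not Tesla specs.
width, height, bytes_per_pixel, fps = 1280, 960, 3, 30

bytes_per_second = width * height * bytes_per_pixel * fps
gb_per_hour = bytes_per_second * 3600 / 1e9
print(f"one uncompressed camera stream: ~{gb_per_hour:.0f} GB per hour")   # ~400 GB/h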

So again: Tesla has created and implemented the self-driving features really, really fast and using the public as guinea pigs. That, to me, is very irresponsible.
 
Last edited:
  • Like
Likes Pepper Mint
  • #54
eltodesukane said:
The same can be said about human-driven cars, word for word.
Yes, but human drivers have something Tesla's autopilot doesn't: a license. I'm suggesting, in effect, that the Tesla autopilot should have a license before being allowed to drive.
 
  • #55
mheslep said:
Of what use is a would-be autonomous vehicle which requires full time monitoring? I see a wink-wink subtext in the admonishment to drivers from Tesla.
Agreed. It is irresponsible (and in most cases illegal) to watch a movie while your autopilot is driving the car. But it is also irresponsible to enable a feature that can be so easily misused (because it is a self-contradictory feature).

Moreover, a product that is in "beta" testing is by definition not ready for public release. It is spectacularly irresponsible to release a life-safety-critical device into the public domain if it isn't ready. All of our nuts-and-bolts discussion of the issue is kinda moot since Tesla readily admits the system is not ready to be released.
 
Last edited:
  • Like
Likes Greg Bernhardt
  • #56
mfb said:
Same as cruise control. You still have to pay attention, but you don't have to do the boring parts like adjusting your foot position by a few millimeters frequently.
The difference between "autonomous" and "cruise control" is pretty clear-cut: with cruise control, you must still be paying attention because you are still performing some of the driving functions yourself, and the functions overlap, so performing one (steering) means watching the others in a way that makes you easily able to respond as needed (not running into the guy in front of you). And, more importantly, you know that it's your responsibility.

Autonomous, on the other hand, is autonomous.

Or, another way, with more details: With cruise control and other driver assist features, the driver has no choice but to maintain control over the car. Taking a nap isn't an option. So they are still wholly responsible for if the car crashes or drives safely. With an "autonomous" vehicle, even one that has the self-contradictory requirement of having a person maintain the capability to take back control, the car must be capable of operating safely on its own. Why? Because the reaction time of a person who doesn't expect to have to take over control is inherently longer than a person who does expect to have to take over control. For example, if you have your cruise control on and see brake lights in front of you, you know it is still up to you to apply the brakes. If you have the car in autopilot and see brake lights, you assume the car will brake as necessary and so you don't make an attempt to apply the brakes. In a great many accident scenarios, this delay will make it impossible for the human to prevent the accident if the car doesn't do its job. In the accident that started this thread, people might assume that the driver was so engrossed in his movie that he never saw the truck, but it is also quite possible that he saw the truck and waited to see how his car would react.

So not only is it irresponsible to provide people with this self-contradictory feature, Tesla's demand that the people maintain the responsibility for the car's driving is an inherently impossible demand to fulfill.
 
Last edited:
  • Like
Likes OCR and mheslep
  • #57
dipole said:
What do you know about the state of AI in regards to automated cars? Nothing I'm sure...
I know one thing, per Tesla's own description of it: it isn't ready for public release.
...but the developers of these systems and the companies investing millions of dollars in R&D into developing this technology seem confident that it is a solvable problem, and I'm willing to believe people who put their money where their mouth is.
I'm all for them solving the problem before implementing it.

Imagine you're a Tesla software engineer. You're the guy who is collecting and dealing with all the bug reports. I wonder if one of them already had a "Can't see white objects against a bright but cloudy sky" bug on his list? How would you feel? How would the police feel about it if a known bug killed a person?
 
  • #58
russ_watters said:
mfb said:
Why?
How exactly do you expect having test data to make the software or simulations worse?
Huh? Was that a typo? Your previous question was "For any given point in time, do you expect anything to be better if we delay implementation of the technology?" How did "better" become "worse"?
It was not a typo. You seem to imply that not collecting data now (i.e., not having autopilot as an option) would improve the technology. In other words, collecting data would make things worse. And I wonder why you expect that.
russ_watters said:
I'm advocating more data/testing before release, not less. Better data and better simulations, resulting in safer cars.
Your suggestion leads to more traffic deaths, in every single calendar year. Delaying the introduction of projects like the autopilot delays the development process. At the time of the implementation, it will be safer if the implementation is done later, yes - but up to then you have tens of thousands of deaths per year in the US alone, most of them avoidable. The roads in 2017 with autopilot will (likely) have fewer accidents and deaths than the roads in 2017 would if autopilot did not exist on them. And the software is only getting better, while human drivers are not. In 2018 the software will be even better. How long do you want to delay implementation?
russ_watters said:
Yes, but human drivers have something Tesla's autopilot doesn't: a license. I'm suggesting, in effect, that the Tesla autopilot should have a license before being allowed to drive.
The autopilot would easily get the human driver's license if there were one limited to the roads the autopilot can handle.
russ_watters said:
For example, if you have your cruise control on and see brake lights in front of you, you know it is still up to you to apply the brakes.
I can make the same assumption with more modern cruise control software that automatically keeps a safe distance to the other car. And be wrong in exactly the same way.
russ_watters said:
So not only is it irresponsible to provide people with this self-contradictory feature, Tesla's demand that the people maintain the responsibility for the car's driving is an inherently impossible demand to fulfill.
It is inherently impossible to watch the street if you don't have to steer? Seriously? In that case I cannot exist, because I can do that as a passenger, and every driving instructor does it as part of their job on a daily basis.
 
  • #59
How does one guard against the mentality of "Hey y'all, hold my beer and watch this!" that an autopilot invites?
Not build such overautomated machines,
or make them increasingly idiot-proof?
jim hardy said:
With the eye-scan technology we have, that autopilot could have been aware of where the driver was looking.

Increasing complexity is Mother Nature's way of "confusing our tongues".
 
  • #60
mfb said:
It was not a typo. You seem to imply that not collecting data now (i.e., not having autopilot as an option) would improve the technology. In other words, collecting data would make things worse. And I wonder why you expect that.
I don't understand. I'm very explicitly saying I want to collect more data, not less data, and you are repeating back to me that I want to collect less data and not more data. I'm really not sure where to go from here other than to request that you paint for me a detailed picture of the scenario you are proposing (or what you think Tesla did), and then I can tell you how it differs from what I proposed.
Your suggestion leads to more traffic deaths, in every single calendar year.
That's an assumption on your part and for the sake of progressing the discussion, I'll assume it's true, even though it may not be.

Along a similar vein, the FDA stands in the way of new drug releases, also almost certainly costing lives (and certainly also saving lives). Why would they do such a despicable thing!? Because they need the new drug to have proven it is safe and effective before releasing it to the public.

See, with Tesla you are assuming it will, on day 1, be better than human drivers. It may be or it may not be. We don't know and they weren't required to prove it. But even if it was, are you really willing to extend that assumption to every car company? Are you willing to do human trials on an Isuzu or Pinto self driving car?

Verifying that something is safe and effective (whether a drug or a self-driving car) is generally done before releasing it to the public because it is irresponsible to assume it will be good instead of verifying it is good.
Delaying the introduction of projects like the autopilot delays the development process.
It doesn't delay the development process, it extends the development process prior to release. I think maybe the difference between what you are describing and what I am describing is that you are suggesting that development can occur using human trials. I think that suggestion is disturbing. But yeah, you are right: if you skip development and release products that are unfinished, you can release products faster.
At the time of the implementation, it will be safer if the implementation is done later, yes - but up to then you have tens of thousands of deaths per year in the US alone, most of them avoidable.
Maybe. And maybe not. But either way, yes, that's how responsible product development works.
I can make the same assumption with more modern cruise control software that automatically keeps a safe distance to the other car. And be wrong in exactly the same way.
Agreed. So I'd also like to know the development cycle and limitations of such systems. We're still two steps removed from the full autopilot there (1. the human knows there is a limit to the system's capabilities; 2. the human is still an inherently essential part of the driving system), but it is definitely a concern of mine.
It is inherently impossible to watch the street if you don't have to steer? Seriously?
Huh? I don't think you are reading my posts closely enough. I don't think what I was describing was that difficult, and that bears no relation to it. Here's what I said (it was in bold last time too): Because the reaction time of a person who doesn't expect to have to take over control is inherently longer than that of a person who does expect to have to take over control.

An auto-brake feature (for example) can reliably override a human if the human makes a mistake. A human cannot reliably override a computer if the computer makes a mistake. That's why it is unreasonable to expect/demand that a person be able to override the computer if it messes up. There is nothing stopping them from trying, of course -- but they are very unlikely to reliably succeed.
 
