First fatal accident involving a car in self-driving mode

In summary: a Tesla operating in self-driving (Autopilot) mode was involved in a fatal crash. The car was programmed to ignore objects that only have components "in the air" without a ground connection (e.g. overhead road signs).
  • #1
Borg
It had to happen eventually. The world's first fatality while a car was driving itself.

Self-Driving Tesla Was Involved in Fatal Crash

In a statement, the National Highway Traffic Safety Administration said preliminary reports indicated that the crash occurred when a tractor-trailer made a left turn in front of the Tesla, and the car failed to apply the brakes.
If the car can't see a semi turning in front of it, you have to wonder if it can see a motorcycle or even other cars.
 
  • #2
And it was clearly a software problem: the car was programmed to ignore things that only have components "in the air" without a ground connection, e.g. overhead road signs (source: Musk).

On the other hand, consider the numbers from Tesla: 130 million miles driven on Autopilot, against an average of one fatal accident every 94 million miles in the US. One fatal accident has happened, which is still a bit below average, although we would need to split that up by road category to get a better estimate.
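A quick back-of-the-envelope check of those two figures (a sketch only; the 130-million-mile and 94-million-mile numbers are simply the ones quoted above):

```python
import math

# Rough check of the fatal-accident rate comparison quoted above.
autopilot_miles = 130e6          # miles driven on Autopilot (Tesla's figure)
us_rate = 1 / 94e6               # US average: one fatality per 94 million miles

expected = autopilot_miles * us_rate     # expected fatalities at the average rate
print(f"Expected fatal accidents at the US average rate: {expected:.2f}")
# -> about 1.38, so one observed fatality is indeed slightly below average.

# Poisson probability of seeing at least one fatality if Autopilot exactly matched the average:
p_at_least_one = 1 - math.exp(-expected)
print(f"P(>=1 fatality) at the average rate: {p_at_least_one:.2f}")     # ~0.75
```

So a single fatality over that mileage is entirely consistent with, or slightly better than, the average human rate; it is far too little data to conclude much either way.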
 
  • Like
Likes Hoophy and Greg Bernhardt
  • #3
I was thinking this might be a case where it was essentially looking under the truck's trailer. My guess is that it saw the cab properly, but after the cab cleared its path, it decided the trailer was a billboard.
 
  • #4
Very likely the accident rate with autopilot will be lower than with a human driver. The autopilot will never tire, get distracted, become emotionally impaired, or drive after a few drinks.

On the other hand, accidents like this one will occur that even the worst human driver would avoid. The vehicle will not show the least bit of extra caution with a carload of children or when going through school zones, nor even attempt to slow down as it drives off a cliff at high speed because a sensor failed once in 100 million miles.

The thing is, I don't think we're programmed to accept those kinds of trade offs.
 
Last edited:
  • #5
mheslep said:
The thing is, I don't think we're programmed to accept those kinds of trade offs.
Airplanes do more and more via autopilot and fly-by-wire, there are automatic metro trains, and so on. And at least initially, we can have the combination of an autopilot plus a human driver monitoring what the car does.
 
  • #6
mheslep said:
The thing is, I don't think we're programmed to accept those kinds of trade offs.

Computers and software will get better at driving. But humans, if anything, are getting worse. The other day I was thinking about this and can easily see a time when the idea of driving a car would seem threatening to many people. The same goes for piloting aircraft. I suspect we will adapt very quickly. I would wager that in less than a generation, the notion of driving cars and flying planes will seem as archaic as riding a horse, and perhaps serve only as a cheap thrill for the daring.

Also, there is no reason why computers can't be taught to recognize potential dangers like school zones. Heck, every school could have a beacon that warns all approaching traffic that children are present. But the car would already know that because of Google Maps. :D
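As a toy illustration of that map/beacon idea, here is a minimal sketch of map-based school-zone awareness; the coordinates, the 300 m radius, and the 20 mph cap are all invented for the example, not anything any carmaker publishes:

```python
import math

# Toy sketch of map-based school-zone awareness, as suggested above.
# The zone list, radius, and speed cap are made up for illustration.
SCHOOL_ZONES = [                     # (latitude, longitude) of hypothetical schools
    (29.6516, -82.3248),
    (29.6600, -82.3300),
]
ZONE_RADIUS_M = 300                  # treat anything within 300 m as a school zone
SCHOOL_ZONE_LIMIT_MPH = 20

def distance_m(lat1, lon1, lat2, lon2):
    """Small-angle approximation of ground distance; fine at these scales."""
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1) * math.cos(math.radians(lat1))
    return 6371000 * math.hypot(dlat, dlon)

def speed_cap_mph(lat, lon, posted_limit_mph):
    """Return the speed the planner should not exceed at this position."""
    in_zone = any(distance_m(lat, lon, zlat, zlon) < ZONE_RADIUS_M
                  for zlat, zlon in SCHOOL_ZONES)
    return min(posted_limit_mph, SCHOOL_ZONE_LIMIT_MPH) if in_zone else posted_limit_mph

print(speed_cap_mph(29.6518, -82.3250, posted_limit_mph=35))   # -> 20, near a zone
```

The point is only that the geometry is trivial once the map data exists; the hard part is keeping that data current.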

Bottom line: in its relative infancy, self-driving technology is already besting humans or doing about as well.
 
  • Like
Likes Salvador, Monsterboy and Pepper Mint
  • #7
mheslep said:
On the other hand, accidents like this one will occur that even the worst human driver would avoid. The vehicle will not show the least bit of extra caution with a carload of children or when going through school zones, nor even attempt to slow down as it drives off a cliff at high speed because a sensor failed once in 100 million miles.

The thing is, I don't think we're programmed to accept those kinds of trade offs.
I agree, and I'm utterly shocked that autonomous cars have been allowed with very little government oversight so far. We don't know how they are programmed to respond to certain dangers or what Sophie's-choice decisions they would make. We don't know what they can and can't see (a truck, apparently!).

In a similar vein, I don't know if people remember, but there was quite a crisis in the late 1990s as airbags saved vast numbers of people but killed a handful of children:
http://www.nytimes.com/2001/08/30/us/child-air-bag-deaths-drop-parents-get-bulk-of-credit.html

In our litigious society, people won't be very willing to accept this.
 
  • Like
Likes mheslep
  • #8
mfb said:
Airplanes do more and more via autopilot and fly-by-wire, there are automatic metro trains, and so on.
Both of those are much, much simpler sensing and control problems than cars are.
 
  • Like
Likes mheslep
  • #9
mfb said:
Airplanes do more and more via autopilot and fly-by-wire, there are automatic metro trains, and so on. And at least initially, we can have the combination of an autopilot plus a human driver monitoring what the car does.
With regard to avoidance and consequence, the case of aircraft autopilot is closer to traditional car cruise control than to the autonomous vehicle software now on the road. Passenger aircraft do not maneuver autonomously on the crowded tarmac.

Autonomous vehicles are on the road today via this Tesla model, and as this accident shows, they are operating with no "human driver monitoring" required. I think the major auto companies have chosen a safer path, calling their autonomous software something like "super cruise control", restricted to negative actions like avoiding impending accidents, or to simple parking, while the driver is always in charge.
 
  • #10
russ_watters said:
In our litigious society, people won't be very willing to accept this.
Airbags got adopted.
russ_watters said:
Both of those are much, much simpler sensing and control problems than cars are.
Sure, and cars are the next step.
mheslep said:
and as this accident shows, they are operating with no "human driver monitoring" required
Tesla requires the driver to do the monitoring. In reality many drivers don't, but can you blame Tesla for this?
 
  • Like
Likes billy_joule
  • #11
mfb said:
Airbags got adopted. Sure, and cars are the next step.
Tesla requires the driver to do the monitoring. In reality many drivers don't, but can you blame Tesla for this?
But in this case the Tesla didn't have a driver, it had a passenger supposedly watching a movie. Was the movie playing on the Tesla's screen? You live by technology but you also die by technology, especially if you become a lab rat along the way.
 
  • #12
See my signature.

What was the driver doing? It is reported he was watching Harry Potter.
http://sanfrancisco.cbslocal.com/2016/07/01/driver-in-fatal-tesla-crash-had-history-of-speeding/
Florida made a law, back when TVs got transistorized, requiring that a TV screen in an automobile must not be located within the driver's sight. What a sensible idea! Even as a teenager I could see that.

Fifty years later my kids bought an SUV with a touchscreen smack dab in the middle of the dashboard, with menu-driven controls for the air conditioner, radio, and other bells and whistles unimaginable to a Luddite like me. There was even a telephone!
What a preposterous state of affairs. Airlines used to have a Flight Engineer to tend to the machinery so the pilot could fly the plane. And I'm flabbergasted by the so-called "journalists" trying to blame everybody except the driver:
http://www.chron.com/business/technology/article/The-Latest-Tesla-crash-a-harbinger-of-industry-8335889.php
Okemah's [the truck's owner] driver was also cited for failing to obey a traffic control device in March and an improper lane change in December. And an inspection last year found the truck's tires were going bald.
http://gizmodo.com/fatal-tesla-crash-proves-full-autonomy-is-the-only-solu-1782923424
And this points to why fully autonomous vehicles are the only types of self-driving cars that make sense on our streets. Ever.
If you allow and even encourage drivers to be distracted by silly electronic doodads, you are going to get distracted drivers and crashes.
My answer to this excess automation is to require that any vehicle having an electronic display screen or keyboard within the driver's view be manned by two persons, one to drive and the other to tend the electronics. Maybe a third person to make sure one of the other two is driving.

Techno-narcissists who think up this silly stuff need a moment of introspection.

old jim
 
Last edited by a moderator:
  • Like
Likes CalcNerd and Hoophy
  • #13
Dotini said:
But in this case the Tesla didn't have a driver, it had a passenger supposedly watching a movie. Was the movie playing on the Tesla's screen? You live by technology but you also die by technology, especially if you become a lab rat along the way.
I don't know where the movie was playing, but playing a movie on a screen can also be for others in the car.
If you play a movie in a regular car and crash, it is clearly your fault. Why? Because you are supposed to pay attention to the road. See the similarity?
 
  • #14
mfb said:
Airbags got adopted.
I'm not saying self-driving cars won't get adopted, or even that they shouldn't (I'm a big fan of all tech). I'm just saying they've been released too soon, with too little oversight and have a long and rocky road ahead of them.
Tesla requires the driver to do the monitoring. In reality many drivers don't, but can you blame Tesla for this?
Yes. In the US, you can blame (sue) anyone for anything. People will.
 
  • #15
russ_watters said:
I'm just saying they've been released too soon, with too little oversight and have a long and rocky road ahead of them.
What improves if they are implemented later?
- while we don't have enough statistics, accident rates don't seem to go up from it
- the software quality certainly profits from larger datasets
russ_watters said:
Yes. In the US, you can blame (sue) anyone for anything. People will.
There is not just the US, and we'll see how successful those attempts are when the driver has ignored several warnings and hasn't followed the instructions at all.
 
  • #16
mfb said:
What improves if they are implemented later?
Bug fixes (like recognizing a truck!)
Regulations.
Simulations.
Training.

You used the analogy of plane autopilots and fly-by-wire. Despite being an easier problem, it is a good analogy: pilots get trained in how to use these systems and what their limitations are (though they sometimes fail to interact with them properly), and the systems are tested for a decade before being "released". And federal regulators are on top of them the entire time. What bothers me is that Tesla/Google are literally beta testing these features on the public like they would a new version of Chrome ( https://www.teslamotors.com/presskit/autopilot ). But when Chrome crashes, no one gets hurt. Would people really be OK if Boeing beta tested its planes on the public?
- while we don't have enough statistics, accident rates don't seem to go up from it
I'm not suggesting they do. In fact, I agree they are almost certain to go down. One link I read described the legal problem this way: today, 95% of car crashes are the driver's fault and 5% are the auto companies' fault. Auto-drive will reduce the 95% that are the drivers' fault and increase the 5% that are the auto companies' fault. The net result for the auto companies is likely to be an increase in liability due to the increase in their responsibility for crashes. That's going to be a difficult problem to deal with, one I doubt people and companies have thought through or are ready for.
- the software quality certainly profits from larger datasets
Certainly. Some of that can be done with the sensors alone, and no autodrive (and I'm sure Tesla did that). And at least Google did a lot of its testing with unmanned cars or with cars that had a paid "test pilot" behind the wheel. But while it would certainly help improve the software faster to beta test an airliner in real-world conditions, carrying passengers, in real airports, it would be a really bad idea to do it.
There is not just the US...
Yes, I think the best bet would be to beta test these cars in countries where people have less right to sue if the car malfunctions. :D

And while I'm generally in favor of reducing this country's litigiousness, this is a case where I think there is clear negligence by the car companies. The very fact that they call it a "beta" test tells us they know the product is not ready for public use.
...and we'll see how successful those attempts are when the driver has ignored several warnings and hasn't followed the instructions at all.
People went to jail for not shutting off airbags that killed their kids. But car companies also got sued and lost lots and lots of money. I don't think the argument that you need to pay attention to your self-driving car because it is flawed and creates new dangers will fly very well in court.
 
  • Like
Likes Monsterboy and mheslep
  • #17
I won't be an early adopter, but I am fully committed to the idea of fully automated cars. It will be the next big revolution once honed.
 
  • #18
russ_watters said:
I don't think the argument that you need to pay attention to your self driving car because it is flawed and creates new dangers will fly very well in court.
I'm going to go a step further: I predict that when legislators and lawyers get a chance to process this accident, the party will be over for Tesla: they will be forced to remove the auto-drive feature until such time as they (the government, the insurance institute, Tesla) can get it under control.

As for current owners of Teslas, hopefully the party is over for them too. No more playing Jenga and watching movies while your car drives itself. People need to take more seriously a beta test that can kill you if it goes wrong and I think now they will.

Even worse, I wonder how Tesla software engineers feel right now. You know, the ones who have been going through the data and collecting the bugs to fix. I wonder if they knew about this bug. What do you call it when someone lets a known design flaw kill a person?
 
  • #19
russ_watters said:
What bothers me is that Tesla/Google are literally beta testing these features on the public like they would a new version of Chrome ( https://www.teslamotors.com/presskit/autopilot ). But when Chrome crashes, no one gets hurt.

One of life's lessons is you weigh risks against consequences.
I regard the whole computer industry as "flaky" for the reason you describe: lack of consequences foments irresponsibility.

I fixed computers long enough to be very wary of them.
I don't like the idea of them running nuke plants or automobiles without supervision.
Indeed, federal law used to forbid placing any nuclear reactor under the control of "a programmable device". They're already working on computer collision avoidance... let's take that one step further:

With the eye-scan technology we have, that autopilot should have been aware of where the driver was looking. When his eyes strayed from the road, it should have pulled over, parked the car,
and announced in that stern feminine GPS voice:
"Your driving privilege is hereby revoked on grounds of inattention. When I sense a new buttprint in the driver's seat, I will unlock the controls."

If you're going to automate, automate. :wink:
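A toy sketch of that attention watchdog is easy enough to write; every interface below (the eye tracker, the pull-over command, the seat sensor) is purely hypothetical, nothing any vehicle actually exposes:

```python
import time

# Toy sketch of the attention watchdog described above.
# eye_tracker, vehicle, and seat_sensor are hypothetical interfaces.
EYES_OFF_ROAD_LIMIT_S = 3.0

def attention_watchdog(eye_tracker, vehicle, seat_sensor):
    eyes_off_since = None
    while vehicle.autopilot_engaged():
        if eye_tracker.eyes_on_road():
            eyes_off_since = None
        else:
            eyes_off_since = eyes_off_since or time.monotonic()
            if time.monotonic() - eyes_off_since > EYES_OFF_ROAD_LIMIT_S:
                vehicle.pull_over_and_park()
                vehicle.announce("Your driving privilege is hereby revoked "
                                 "on grounds of inattention.")
                vehicle.lock_controls(until=seat_sensor.new_occupant_detected)
                return
        time.sleep(0.1)
```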

Or, we could keep life simple..

old jim
 
Last edited:
  • #20
russ_watters said:
What improves if they are implemented later?
Bug fixes (like recognizing a truck!)
Regulations.
Simulations.
Training.
I think you misunderstood my question. For any given point in time, do you expect anything to be better if we delay implementation of the technology? Would Tesla's software at the end of 2016 be better if they didn't have drivers using it? Would regulations at the end of the year be better? Simulations? Anything else? Same question for 2017, 2018, ...

I'm not comparing "self-driving features first introduced in 2026" with "self-driving features first introduced in 2016". Of course the first would have better software. But then you have to ignore the tens of thousands who die from manual driving in those 10 years in the US alone.

I think the current accident will do a lot to improve the software, the regulations, and so on, reducing the number of accidents in the future.
 
  • #21
mfb said:
I think you misunderstood my question. For any given point in time, do you expect anything to be better if we delay implementation of the technology? Would Tesla's software at the end of 2016 be better if they didn't have drivers using it? Would regulations at the end of the year be better? Simulations? Anything else? Same question for 2017, 2018, ...
Absolutely (well, maybe not the regulations, as the government is pitifully slow to adapt to such things)!
 
  • #22
mfb said:
Airbags got adopted.
And laws were quickly passed requiring children under a certain size to ride only in the back seat, and/or with child seats that prevent the seat belt injuries to which small children were particularly susceptible. If air bags and seat belts were somehow still killing dozens of kids per year without such resolutions, we might well have had problems continuing with them as is.

...Tesla requires the driver to do the monitoring. In reality many drivers don't, but can you blame Tesla for this?
Of what use is a would-be autonomous vehicle which requires full time monitoring? I see a wink-wink subtext in the admonishment to drivers from Tesla.
 
Last edited:
  • Like
Likes russ_watters
  • #23
russ_watters said:
Absolutely (well, maybe not the regulations, as the government is pitifully slow to adapt to such things)!
Why?
How exactly do you expect having test data to make the software or simulations worse?
mheslep said:
And laws were quickly passed requiring children under a certain size to ride only in the back seat, and/or with child seats that prevent the seat belt injuries to which small children were particularly susceptible. If air bags and seat belts were somehow still killing dozens of kids per year without such resolutions, we might well have had problems continuing with them as is.
That's a good example of how accidents ultimately lead to better safety.
mheslep said:
Of what use is a would-be autonomous vehicle which requires full time monitoring?
Same as cruise control. You still have to pay attention, but you don't have to do the boring parts like adjusting your foot position by a few millimeters frequently.
 
  • #24
It's beginning to sound like a Darwin award.

http://www.teslarati.com/will-spacex-overcome-tech-challenges-mars-presents-countdown-mars-part-two/


What's that Mars link doing there? Should be this one.

http://www.teslarati.com/witnesses-details-deadly-tesla-accident/
According to another witness that was driving eastbound on U.S. 27A – the same interstate [that's a mistake - it's not an interstate. jh] the Tesla was traveling on – the Model S on Autopilot passed her while she was driving at 85 mph. This detail not mentioned by the official police report filed by the Florida Highway Patrol provides further insight into why the deadly collision may have happened with the tractor trailer that turned before it.

That road is not an interstate, or even limited-access, though it is a four-lane divided highway.
[attached image: TeslaWilliston.jpg]
How far down the road does a Tesla autopilot look?

65 mph braking distances:
Thinking distance: 64 ft (20 m)
Braking distance: 211 ft (64 m)
Stopping distance: 275 ft (84 m)
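Those figures can be reproduced from a simple kinematic sketch; the reaction time (~0.67 s) and deceleration (~0.67 g) below are assumptions chosen to match typical highway-code values, not Tesla specifications:

```python
# Rough reconstruction of the 65 mph stopping-distance figures quoted above.
MPH_TO_FTPS = 5280 / 3600      # 1 mph = 1.4667 ft/s
G_FTPS2 = 32.17                # standard gravity in ft/s^2

def stopping_distance_ft(speed_mph, reaction_s=0.67, decel_g=0.67):
    """Thinking + braking distance in feet."""
    v = speed_mph * MPH_TO_FTPS              # speed in ft/s
    thinking = v * reaction_s                # distance covered before braking starts
    braking = v**2 / (2 * decel_g * G_FTPS2) # v^2 / (2a)
    return thinking, braking, thinking + braking

think, brake, total = stopping_distance_ft(65)
print(f"{think:.0f} ft thinking, {brake:.0f} ft braking, {total:.0f} ft total")
# -> roughly 64 ft, 211 ft, 275 ft, matching the figures above
```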
 
Last edited:
  • Like
Likes Dotini
  • #25
Greg Bernhardt said:
I won't be an early adopter, but I am fully committed to the idea of fully automated cars. It will be the next big revolution once honed.

I will never adopt. Burnouts and donuts are just too much fun! That is why I drive a 1970 Chevelle with almost 475 hp to the flywheel. No autopilot needed.
 
  • #26
jim's link said:
From the description of the accident, it seems like death or serious injury was inevitable even if the driver had been in full control of the car.

jim hardy said:
It's beginning to sound like a Darwin award.
Not if the report you linked is right.
 
  • #27
mfb said:
...Same as cruise control. You still have to pay attention, but you don't have to do the boring parts like adjusting your foot position by a few millimeters frequently.
If autonomous were the same as cruise control, it would be no better than cruise control. Autonomous means ... autonomous, i.e. "acting independently", not under supervision, by definition.

Reckless creation of otherwise dangerous buildings, machines, drugs, etc. is not sanctioned in public spaces on the rationale that the eventual catastrophic failure will lead to better outcomes, even if the catastrophe would in fact lead to better future outcomes, because one cannot use another person as a lab rat. Elon Musk is free to autonomously drive around with his five kids in the car for a few hundred thousand miles if he wants to serve the greater good, but he is not justified in recruiting beta testers who cannot understand the limitations of the autonomous software.
 
  • #28
The main difference between automated cars and people is that every time an automated car makes a mistake, the entire technology improves, and the risk for everyone decreases. As the algorithm improves and evolves, the risk continually goes down. With people, no matter how many times someone kills themselves or someone else, the risk pretty much stays the same: we don't really learn from other people's mistakes, and often we don't learn from our own, because we're dead by that point.

I don't think there's much difference between an automated car and public transportation. In both cases you're fully entrusting your life to a driver who has been deemed by a public authority to be a competent operator. A human bus driver or train conductor can just as easily make a mistake and kill you as an automated car algorithm. Why should you be more afraid of an automated car than you are of taking the bus?
 
  • #29
dipole said:
I don't think there's much difference between an automated car and public transportation
That's profoundly myopic, akin to saying there's not much difference between the latest clever soccer-playing robot and human beings, because both can make a basic play plan and kick the ball. I say profoundly because navigating the world and dealing with others out there on the road is not just a bit more complex than kicking a ball but vastly more complex, and so are the consequences of getting it wrong.
 
  • #30
What's important to me about this accident is not that the automated system missed a big white trailer with little contrast against a brightly lit sky (something people do all the time while driving); it's this:

The autopilot failed to disengage after the deadly wreck and continued driving for quite a distance.
http://www.teslarati.com/witnesses-details-deadly-tesla-accident/
Bobby Vankavelaar owns the home where the mangled Model S driving on Autopilot eventually came to a stop after colliding with a tractor trailer and killing 40-year old Joshua Brown who was behind the wheel. In a video published by ABC Action News, Vankavelaar tells the news outlet that the Tesla traveled “hundreds of yards from the point of impact, through a fence into an open field, through another fence and then avoiding a bank of trees before being unable to swerve and miss a power pole that eventually stopped the car a few feet away.”
 
  • #31
jim's link said:
From the description of the accident, it seems like death or serious injury was inevitable even if the driver had been in full control of the car.
jim hardy said:
It's beginning to sound like a Darwin award.

mfb said:
Not if the report you linked is right.
Well, you got me. I found that statement in the June 30 article at the Mars link in my post, all right. It's written by one Steve Hanley.
But you won't find the quote that I'd pasted anywhere in Hanley's June 30 article.
And it's only reporter Steve Hanley's spin on it. His words do not appear in Tesla's statement, which he included in his article, but look how cleverly he led us to believe they were...

Back to the point:
That is NOT the article from which I copied the link.
I copied the link from a July 1 article which makes no such claim. That article is written by one "Gene" and is linked below.
I suppose both Hanley and Gene are Tesla aficionados, in that they write for 'Teslarati'...

What a strange webpage Teslarati has there:
the address changes as you scroll down, according to which article is at the bottom of the screen.
Symptomatic of too much computer and too little sense.
Isn't it a basic IT premise, "Do not write self-modifying code"?

So the link had changed itself when I 'copied' it. Sorry about that; I never knew it could happen or I would have checked. I always do at YouTube.
If you go to 'Gene's' July 1 article that I had on top of my screen when I clicked 'copy',
you won't find that speculation:
http://www.teslarati.com/witnesses-details-deadly-tesla-accident/

Artificial intelligence and natural stupidity don't mix. My computer mistake demonstrates it as surely as did that poor driver's. But mine was inconsequential.

Again, sorry for any confusion. So, Mr. mfb, you got me good with that one! :DD

old jim

P.S. I fixed the link up above.
 
Last edited:
  • #32
jim hardy said:
How far down the road does a Tesla autopilot look?
The radar likely covers the front out to 200 m, two football fields, perhaps with something like a 20-degree beam.

Many have contributed to the field of autonomous vehicles, but the researcher most directly responsible for moving it from the ditch-finding exercise it once was to what we see today is Sebastian Thrun, with the Stanford DARPA vehicle; a description of how it all originally worked is here, with the vehicle sensors described on page 664.
http://isl.ecst.csuchico.edu/DOCS/darpa2005/DARPA 2005 Stanley.pdf
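For what it's worth, a 200 m forward range is more than enough in purely geometric terms; the sketch below uses the 200 m estimate above and the 65 mph stopping distance quoted earlier in the thread (both thread estimates, not published specs):

```python
# How much warning does a 200 m forward radar range buy at highway speed?
radar_range_m = 200.0
speed_mph = 65.0
speed_ms = speed_mph * 0.44704           # mph -> m/s, ~29 m/s

lookahead_s = radar_range_m / speed_ms   # time to cover the full sensing range
stopping_m = 84.0                        # total stopping distance at 65 mph (from above)

print(f"Look-ahead time: {lookahead_s:.1f} s")                              # ~6.9 s
print(f"Range margin over stopping distance: {radar_range_m - stopping_m:.0f} m")
# Plenty of range; the hard part is classifying the return as a crossing
# trailer rather than an overhead sign.
```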
 
  • #33
nsaspook said:
...The autopilot failed to disengage after the deadly wreck and continued driving for quite a distance.
http://www.teslarati.com/witnesses-details-deadly-tesla-accident/

The autopilot did not "fail" to disengage; it continued as designed. It continued until it crashed again, finally into a pole per the bystander report. The video in the article linked above shows the same.


Of course it did. The vehicle sensors and traction systems were still functioning with the roof sheared off. There is no 'stop in the event of tragic loss of life' software routine.
 
  • Like
Likes russ_watters
  • #34
mheslep said:
Of course it did. The vehicle sensors and traction systems were still functioning with the roof sheared off. There is no 'stop in the event of tragic loss of life' software routine.

There should be some sort of sensor-driven routine (impact sensors, or structural-integrity checks like still having a roof) that would bring the car to a halt via Automatic Emergency Braking in the event of a major accident.
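A minimal sketch of the kind of post-crash halt logic being suggested; the sensor names and thresholds here are hypothetical placeholders, not anything Tesla publishes:

```python
from dataclasses import dataclass

# Toy sketch of a post-crash halt routine, as suggested above.
# All sensor readings and thresholds are hypothetical placeholders.
@dataclass
class VehicleState:
    peak_accel_g: float      # largest acceleration spike seen (airbag-style sensor)
    cabin_intact: bool       # e.g. roof/structure integrity switch
    autopilot_engaged: bool

def post_crash_check(state: VehicleState) -> str:
    """Return the action the supervisory layer should take."""
    crash_detected = state.peak_accel_g > 8.0 or not state.cabin_intact
    if crash_detected and state.autopilot_engaged:
        # Disengage the driving policy and command a controlled stop with
        # hazard lights, rather than continuing to track the lane.
        return "DISENGAGE_AND_EMERGENCY_STOP"
    return "CONTINUE"

print(post_crash_check(VehicleState(peak_accel_g=25.0, cabin_intact=False,
                                    autopilot_engaged=True)))
# -> DISENGAGE_AND_EMERGENCY_STOP
```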
 
Last edited:
  • #35
Why do you suppose it left the road rather than continuing on into downtown Williston?
Physical damage to the steering wheel? Or was it trying to pull over and park? Or did it lose a camera and couldn't see?
 
