First fatal accident involving a car in self-driving mode

AI Thread Summary
The first fatal accident involving a self-driving Tesla occurred when a tractor-trailer made a left turn in front of the vehicle, which failed to brake. Concerns were raised about the software's ability to detect obstacles, particularly those without a ground connection, leading to questions about its reliability in various driving scenarios. Despite the incident, statistics suggest that Tesla's autopilot may still have a lower accident rate compared to human drivers. The discussion highlighted the need for better oversight and regulation of autonomous vehicles, as well as the ethical implications of allowing such technology to operate without adequate human monitoring. Overall, the incident underscores the challenges and risks associated with the early deployment of self-driving technology.
  • #51
russ_watters said:
I agree and I'm utterly shocked that autonomous cars have been allowed with very little government oversight so far. We don't know how they are programmed to respond to certain dangers or what Sophie's choices they would make. We don't know what they can and can't (a truck!) see.
How do you know when an object in the field-of-view has a signal-to-noise ratio too low to be detected?

It isn't just Tesla's autopilot that has that problem. Many accidents have happened when human drivers had their vision momentarily interrupted by reflection or by glare caused by scattering or refraction.
 
  • #52
russ_watters said:
I agree and I'm utterly shocked that autonomous cars have been allowed with very little government oversight so far. We don't know how they are programmed to respond to certain dangers or what Sophie's choices they would make. We don't know what they can and can't (a truck!) see.
The same can be said about human driven cars, word for word.
There were 29,989 fatal motor vehicle crashes in the United States in 2014 in which 32,675 deaths occurred.
This resulted in 10.2 deaths per 100,000 people and 1.08 deaths per 100 million vehicle miles traveled.
http://www.iihs.org/iihs/topics/t/general-statistics/fatalityfacts/state-by-state-overview
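As a quick sanity check, those two rates follow directly from the death count; a minimal sketch, with the population and vehicle-miles figures back-calculated as rough assumptions rather than taken from the IIHS page:

```python
# Back-of-the-envelope check of the 2014 US fatality rates quoted above.
# The death count is from the quote; the population and vehicle-miles
# figures are rough assumptions back-calculated to match the quoted rates.

deaths_2014 = 32_675
us_population = 320e6        # approx. 2014 US population (assumption)
vehicle_miles = 3.03e12      # approx. 2014 vehicle miles traveled (assumption)

deaths_per_100k_people = deaths_2014 / us_population * 100_000
deaths_per_100m_miles = deaths_2014 / vehicle_miles * 100e6

print(f"{deaths_per_100k_people:.1f} deaths per 100,000 people")     # ~10.2
print(f"{deaths_per_100m_miles:.2f} deaths per 100 million miles")   # ~1.08
```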
 
  • Like
Likes mfb
  • #53
mfb said:
Why?
How exactly do you expect having test data to make the software or simulations worse?
[emphasis added]
Huh? Was that a typo? Your previous question was "For any given point in time, do you expect anything to be better if we delay implementation of the technology?" How did "better" become "worse"?

I'm advocating more data/testing before release, not less. Better data and better simulations, resulting in safer cars. I can't imagine how my meaning could not be clear, but let me describe in some detail how I think the process should go. Some of this has probably happened, most not. First a little background on the problem. The car's control system must contain at least the following elements:
A. Sensor hardware to detect the car's surroundings.
B. Control output hardware to actually steer the car, depress the brakes, etc.
C. A computer for processing the needed logic:
D. Sensor interpretation logic to translate the sensor inputs into a real picture of the car's surroundings.
E. Control output logic to make the car follow the path it is placed on.
F. Decision-making logic to determine what to do when something doesn't go right.

Here's the development-to-release timeline that I think should be taking place:
1. Tesla builds and starts selling a car, the Model S, with all the hardware they think is needed for autonomous control. This actually happened in October of 2014.

2. Tesla gathers data from real-world driving of the car. The human is driving, the sensors just passively collecting data.

3. Tesla uses the data collected to write software for the various control components and create simulations to test the control logic.

4. Tesla installs the software to function passively in the cars. By this I mean the car's computer does everything but send the output to the steering/throttle/brake. The car records the data and compares the person's driving to the computer's simulation of the driving. This would flag major differences between behaviors so the software could be refined and point to different scenarios that might need to be worked out in simulation (a rough sketch of this "shadow mode" idea appears a bit further below).

5. Tesla deploys a beta test of the system using a fleet of trained and paid test "pilots" of the cars, similar to how Google had an employee behind the wheel of their Street View autonomous cars. These drivers would have training on the functional specs of the car and its behaviors -- but most of all, how to behave if they think the car may be malfunctioning (don't play "chicken" with it).

6. Tesla makes the necessary hardware and software revisions to finalize the car/software.

7. Tesla submits a report of the beta program's results, the functional specifications of the autopilot, and a few test cars to the Insurance Institute and NHTSA for their approval.

8. The autopilot is enabled (this actually happened in October of 2015).

Each of these steps, IMO, should take 1-2 years (though some would overlap) and the total time from first test deployment to public release should take about a decade. Tesla actually enabled the feature about 1 year after the sensors started being installed in the cars, so it couldn't possibly have done much of anything with most of the steps, and we know for sure they made zero hardware revisions to the first cars with the capability (which doesn't mean they haven't since improved later cars' sensor suites). Since the software is upgraded "over the air", the cars have some communication ability with Tesla, but how much I don't know. Suffice it to say, though, that the amount of data these cars generate and process would have to be in the gigabytes-to-terabytes-per-hour range. A truly massive data processing effort.
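Step 4 above is essentially what is often called "shadow mode": the software decides what it would do, the human actually drives, and big disagreements get flagged for review. A minimal sketch of that bookkeeping, with hypothetical names and thresholds (nothing here is Tesla's actual implementation):

```python
# Rough sketch of "shadow mode" logging (step 4 above): the autopilot computes
# what it *would* do, the human actually drives, and large disagreements are
# flagged so engineers can replay the scenario in simulation.
# All names and thresholds are hypothetical, not Tesla's implementation.

from dataclasses import dataclass

@dataclass
class Command:
    steering_deg: float   # steering wheel angle, degrees
    brake: float          # 0..1 fraction of full braking
    throttle: float       # 0..1 fraction of full throttle

def disagreement(human: Command, autopilot: Command) -> float:
    """Crude scalar measure of how differently the two 'drivers' acted."""
    return (abs(human.steering_deg - autopilot.steering_deg) / 90.0
            + abs(human.brake - autopilot.brake)
            + abs(human.throttle - autopilot.throttle))

FLAG_THRESHOLD = 0.5   # purely illustrative; would be tuned from fleet data

def shadow_log(frames):
    """frames: iterable of (timestamp, sensor_snapshot, human_cmd, autopilot_cmd)."""
    flagged = []
    for t, sensors, human_cmd, ap_cmd in frames:
        score = disagreement(human_cmd, ap_cmd)
        if score > FLAG_THRESHOLD:
            # keep the raw sensor data so the scenario can be replayed offline
            flagged.append({"time": t, "score": score, "sensors": sensors})
    return flagged
```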

So again: Tesla has created and implemented the self-driving features really, really fast, using the public as guinea pigs. That, to me, is very irresponsible.
 
Last edited:
  • Like
Likes Pepper Mint
  • #54
eltodesukane said:
The same can be said about human driven cars, word for word.
Yes, but human drivers have something Tesla's autopilot doesn't: a license. I'm suggesting, in effect, that the Tesla autopilot should have a license before being allowed to drive.
 
  • #55
mheslep said:
Of what use is a would-be autonomous vehicle which requires full time monitoring? I see a wink-wink subtext in the admonishment to drivers from Tesla.
Agreed. It is irresponsible (and in most cases illegal) to watch a movie while your autopilot is driving the car. But it is also irresponsible to enable a feature that can be so easily misused (because it is a self-contradictory feature).

Moreover, a product that is in "beta" testing is by definition not ready for public release. It is spectacularly irresponsible to release a life-safety-critical device to the public if it isn't ready. All of our nuts-and-bolts discussion of the issue is kind of moot, since Tesla readily admits the system is not ready to be released.
 
Last edited:
  • Like
Likes Greg Bernhardt
  • #56
mfb said:
Same as cruise control. You still have to pay attention, but you don't have to do the boring parts like adjusting your foot position by a few millimeters frequently.
The difference between "autonomous" and "cruise control" is pretty clear-cut: with cruise control, you must still be paying attention because you are still performing some of the driving functions yourself, and the functions overlap, so performing one (steering) means watching the others in a way that makes you easily able to respond as needed (not running into the guy in front of you). And, more importantly, you know that's your responsibility.

Autonomous, on the other hand, is autonomous.

Or, another way, with more details: With cruise control and other driver assist features, the driver has no choice but to maintain control over the car. Taking a nap isn't an option. So they are still wholly responsible for if the car crashes or drives safely. With an "autonomous" vehicle, even one that has the self-contradictory requirement of having a person maintain the capability to take back control, the car must be capable of operating safely on its own. Why? Because the reaction time of a person who doesn't expect to have to take over control is inherently longer than a person who does expect to have to take over control. For example, if you have your cruise control on and see brake lights in front of you, you know it is still up to you to apply the brakes. If you have the car in autopilot and see brake lights, you assume the car will brake as necessary and so you don't make an attempt to apply the brakes. In a great many accident scenarios, this delay will make it impossible for the human to prevent the accident if the car doesn't do its job. In the accident that started this thread, people might assume that the driver was so engrossed in his movie that he never saw the truck, but it is also quite possible that he saw the truck and waited to see how his car would react.
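To put rough numbers on that reaction-time point: at highway speed, even a small extra delay before braking costs a lot of road. A quick sketch, with the delay values being illustrative assumptions rather than measured figures:

```python
# Extra road covered during the driver's reaction delay at highway speed.
# The delay values are illustrative assumptions, not measured reaction times.

speed_mph = 65
speed_fps = speed_mph * 5280 / 3600          # ~95 ft/s

delays = {
    "driver expecting to brake":       1.0,  # seconds (assumed)
    "driver waiting on the autopilot": 3.0,  # seconds (assumed)
}

for label, delay_s in delays.items():
    print(f"{label}: {delay_s * speed_fps:.0f} ft traveled before braking even starts")
```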

So not only is it irresponsible to provide people with this self-contradictory feature, Tesla's demand that the people maintain the responsibility for the car's driving is an inherently impossible demand to fulfill.
 
Last edited:
  • Like
Likes OCR and mheslep
  • #57
dipole said:
What do you know about the state of AI in regards to automated cars? Nothing I'm sure...
I know one thing, per Tesla's own description of it: it isn't ready for public release.
...but the developers of these systems and the companies investing millions of dollars in R&D into developing this technology seem confident that it is a solvable problem, and I'm willing to believe people who put their money where their mouth is.
I'm all for them solving the problem before implementing it.

Imagine you're a Tesla software engineer. You're the guy who is collecting and dealing with all the bug reports. I wonder if one of them already had a "Can't see white objects against a bright but cloudy sky" bug on his list? How would you feel? How would the police feel about it if a known bug killed a person?
 
  • #58
russ_watters said:
mfb said:
Why?
How exactly do you expect having test data to make the software or simulations worse?
Huh? Was that a typo? Your previous question was "For any given point in time, do you expect anything to be better if we delay implementation of the technology?" How did "better" become "worse"?
It was not a typo. You seem to imply that not collecting data now (=having autopilot as option) would improve the technology. In other words, collecting data would make things worse. And I wonder why you expect that.
russ_watters said:
I'm advocating more data/testing before release, not less. Better data and better simulations, resulting in safer cars.
Your suggestion leads to more traffic deaths, in every single calendar year. Delaying the introduction of projects like the autopilot delays the development process. At the time of the implementation, it will be safer if the implementation is done later, yes - but up to then you have tens of thousands of deaths per year in the US alone, most of them avoidable. The roads in 2017 with the autopilot will (likely) have fewer accidents and deaths than the roads in 2017 would without it. And the software is only getting better, while human drivers are not. In 2018 the software will be even better. How long do you want to delay implementation?
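To make the trade-off concrete, here is a toy calculation of the argument; every number in it is invented purely for illustration (none are Tesla or NHTSA figures), and it assumes the software only improves once it is on the road collecting fleet data:

```python
# Toy model of the trade-off described above. Every number is invented purely
# for illustration (none are Tesla or NHTSA figures), and the model assumes
# the autopilot only improves once it is deployed and collecting fleet data.

human_rate = 1.1          # fatalities per 100 million miles, human drivers (assumed)
ap_rate_at_release = 1.0  # autopilot rate at release, assumed slightly better
improvement = 0.15        # assumed fractional improvement per year of fleet data
fleet_miles = 2.0         # autopilot-eligible miles per year, in units of 100M (assumed)

def cumulative_deaths(years, release_year):
    """Fatalities on autopilot-eligible miles over `years` if released at `release_year`."""
    total, rate = 0.0, ap_rate_at_release
    for y in range(years):
        if y < release_year:
            total += human_rate * fleet_miles    # humans drive those miles
        else:
            total += rate * fleet_miles          # the autopilot drives them
            rate *= (1 - improvement)            # and improves with the data
    return total

print("release now:       ", round(cumulative_deaths(10, 0), 1))
print("release in 5 years:", round(cumulative_deaths(10, 5), 1))
```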
russ_watters said:
Yes, but human drivers have something Tesla's autopilot doesn't: a license. I'm suggesting, in effect, that the Telsa autopilot should have a license before being allowed to drive.
The autopilot would easily get the human driver's license if there was one limited to the roads autopilot can handle.
russ_watters said:
For example, if you have your cruise control on and see brake lights in front of you, you know it is still up to you to apply the brakes.
I can make the same assumption with more modern cruise control software that automatically keeps a safe distance to the other car. And be wrong in exactly the same way.
russ_watters said:
So not only is it irresponsible to provide people with this self-contradictory feature, Tesla's demand that the people maintain the responsibility for the car's driving is an inherently impossible demand to fulfill.
It is inherently impossible to watch the street if you don't have to steer? Seriously? In that case I cannot exist, because I can do exactly that as a passenger, and every driving instructor does it as part of their job on a daily basis.
 
  • #59
How does one guard against the mentality of "Hey y'all, hold my beer and watch this!" that an autopilot invites?
Not build such overautomated machines,
or make them increasingly idiot-proof?
jim hardy said:
With the eye scan technology we have, that autopilot software could have been aware of where the driver was looking.

Increasing complexity is Mother Nature's way of "confusing our tongues".
 
  • #60
mfb said:
It was not a typo. You seem to imply that not collecting data now (=having autopilot as option) would improve the technology. In other words, collecting data would make things worse. And I wonder why you expect that.
I don't understand. I'm very explicitly saying I want to collect more data, not less data and you are repeating back to me that I want to collect less data and not more data. I'm really not sure where to go from here other than to request that you paint for me a detailed picture of the scenario you are proposing (or you think is what Tesla did) and then I can tell you how it differs from what I proposed.
Your suggestion leads to more traffic deaths, in every single calendar year.
That's an assumption on your part and for the sake of progressing the discussion, I'll assume it's true, even though it may not be.

Along a similar vein, the FDA stands in the way of new drug releases, also almost certainly costing lives (and certainly also saving lives). Why would they do such a despicable thing!? Because they need the new drug to have proven it is safe and effective before releasing it to the public.

See, with Tesla you are assuming it will, on day 1, be better than human drivers. It may be or it may not be. We don't know and they weren't required to prove it. But even if it was, are you really willing to extend that assumption to every car company? Are you willing to do human trials on an Isuzu or Pinto self driving car?

Verifying that something is safe and effective (whether a drug or a self-driving car) is generally done before releasing it to the public because it is irresponsible to assume it will be good instead of verifying it is good.
Delaying the introduction of projects like the autopilot delays the development process.
It doesn't delay the development process, it extends the development process prior to release. I think maybe the difference between what you are describing and what I am describing is that you are suggesting that development can occur using human trials. I think that suggestion is disturbing. But yeah, you are right: if you skip development and release products that are unfinished, you can release products faster.
At the time of the implementation, it will be safer if the implementation is done later, yes - but up to then you have tens of thousands of deaths per year in the US alone, most of them avoidable.
Maybe. And maybe not. But either way, yes, that's how responsible product development works.
I can make the same assumption with more modern cruise control software that automatically keeps a safe distance to the other car. And be wrong in exactly the same way.
Agreed. So I'd also like to know the development cycle and limitations of such systems too. But we're still two steps removed from the full autopilot there (1. The human knows there is a limit to the system's capabilities, 2. The human is still an inherently essential part of the driving system), but it is definitely a concern of mine.
It is inherently impossible to watch the street if you don't have to steer? Seriously?
Huh? I don't think you are reading my posts closely enough. I don't think what I was describing was that difficult, and that question bears no relation to it. Here's what I said (it was in bold last time, too): Because the reaction time of a person who doesn't expect to have to take over control is inherently longer than that of a person who does expect to have to take over control.

An auto-brake feature (for example) can reliably override a human if the human makes a mistake. A human cannot reliably override a computer if the computer makes a mistake. That's why it is unreasonable to expect/demand that a person be able to override the computer if it messes up. There is nothing stopping them from trying, of course -- but they are very unlikely to reliably succeed.
 
  • #61
russ_watters said:
I don't understand. I'm very explicitly saying I want to collect more data, not less data and you are repeating back to me that I want to collect less data and not more data.
The difference is the time. You suggest to have less actual driving data at the end of 2016, less data at the end of 2017, and so on.

You suggest to have more test data at the time of the introduction. But that time is different for the two scenarios. That is the key point.

russ_watters said:
Along a similar vein, the FDA stands in the way of new drug releases, also almost certainly costing lives (and certainly also saving lives). Why would they do such a despicable thing!? Because they need the new drug to have proven it is safe and effective before releasing it to the public.
And how do you prove it? After laboratory and animal tests, you test it at progressively larger groups of humans who volunteer to test the new drug, first with healthy humans and then with those who have an application for the drug. That's exactly what Tesla did, where the lab/animal steps got replaced by test cars driving around somewhere at the company area.

russ_watters said:
See, with Tesla you are assuming it will, on day 1, be better than human drivers. It may be or it may not be. We don't know and they weren't required to prove it. But even if it was, are you really willing to extend that assumption to every car company? Are you willing to do human trials on an Isuzu or Pinto self driving car?
I personally would wait until those cars drove some hundreds of thousands of kilometers. But no matter how long I would wait, if the software requires the driver to pay attention, I would (a) do that and (b) even if I wouldn't, if the accident rate is not above that of human drivers, I wouldn't blame the company to release such a feature for tests.
russ_watters said:
But yeah, you are right: if you skip development and release products that are unfinished, you can release products faster.
When do you expect the development of self-driving cars to be finished? What does "finished" even mean? At a state where no one can imagine how to improve the software? Then we'll never get self-driving cars.

Airplane development is still ongoing, 100 years after the Wright brothers built the first proper airplane. Do we still have to wait for accident rates to decrease further before we can use them? Yes, this is an exaggerated example, but the concept is the same. Early airplanes had high accident rates. But without those early airplanes we would never have our modern airplanes, with fewer than one deadly accident per million flights.

russ_watters said:
A human cannot reliably override a computer if the computer makes a mistake.
"Oh **** there is a truck in front of me, I'll brake no matter what the car would do" should work quite well. With some driving experience it's something you would not even have to think about when driving yourself.
 
  • #62
Jenab2 said:
How do you know when an object in the field-of-view has a signal-to-noise ratio too low to be detected?

It isn't just Tesla's autopilot that has that problem. Many accidents have happened when human drivers had their vision momentarily interrupted by reflection or by glare caused by scattering or refraction.
Humans don't entirely miss tractor trailers in daylight and clear weather for several seconds. Nor, per reports, did the vehicle in this case; the problem was the vehicle had no concept of what a truck is, mistaking the trailer's high ground clearance for a clear path.
 
  • Like
Likes Monsterboy, russ_watters and nsaspook
  • #63
russ_watters said:
I know one thing, per Tesla's own description of it: it isn't ready for public release.

I'm all for them solving the problem before implementing it.
...
I'd not take their description of "beta" to necessarily mean they were admitting they intentionally released an unsafe vehicle. Rather, the unsafe observation comes about from the details of this accident. Tesla ships new software out to their cars frequently that's unrelated to autonomous driving, software that changes the likes of charging time and ride height; they call these releases beta sometimes, but it does not mean the details are safety critical.
 
  • #64
mfb said:
...

"Oh **** there is a truck in front of me, I'll brake no matter what the car would do" should work quite well. With some driving experience it's something you would not even have to think about when driving yourself.

How does singling out a case of "truck in front" change anything? Riding autonomously one will routinely see dozens of obstacles requiring the brake to avoid a collision: the red light intersection, the downed tree, etc. One either gets ahead of the software in every possible case, manually applying the brakes, or never -- one can't do so selectively without detailed knowledge of the SW corner cases. In the former case, routine manual override, of what use are autonomous vehicles?
 
  • #65
mfb said:
The difference is the time. You suggest to have less actual driving data at the end of 2016, less data at the end of 2017, and so on.

You suggest to have more test data at the time of the introduction. But that time is different for the two scenarios. That is the key point.
Ok, sure -- more data before release, which means less data per year. Yes. More real world data at a certain calendar year by releasing early, less data before release if we release it early. Yes: I favor more data/testing before release and less "pre-release testing" (if that even has any meaning anymore) on the public.
And how do you prove it? After laboratory and animal tests, you test it at progressively larger groups of humans who volunteer to test the new drug, first with healthy humans and then with those who have an application for the drug. That's exactly what Tesla did, where the lab/animal steps got replaced by test cars driving around somewhere at the company area.
They did? When? How much? Where can I download the report, application and see the government approval announcement?

Again: they are referring to the current release as a "beta test" and the "beta test" started a calendar year after the first cars with the sensors started to be sold. Drugs don't have "beta tests" that involve anyone who wants to try them and both cars and drugs take much, much longer than one year to develop and test.
I personally would wait until those cars drove some hundreds of thousands of kilometers.
[assuming Tesla did that, which I doubt...] You're not the CEO of Isuzu. What if he disagrees and thinks it should only take dozens of miles? Does he get to decide his car is ready for a public "alpha test"? How would we stop him from doing that?
But no matter how long I would wait, if the software requires the driver to pay attention, I would (a) do that...
Your reaction time is inferior to the computer's: you are incapable of paying enough attention to correct some mistakes made by the car. Also, the software doesn't require the driver to pay attention, the terms and conditions do. That's different from driver assist features, where the software literally requires the driver to pay attention. I'll provide more details in my next post.
...and (b) even if I wouldn't, if the accident rate is not above that of human drivers, I wouldn't blame the company to release such a feature for tests.
Catch-22: you can't know that until after it happens if you choose not to regulate the introduction of the feature.
When do you expect the development of self-driving cars to be finished? What does "finished" even mean?
The short answer (for an unregulated industry) is that development is "finished" when a company feels confident enough in it that it no longer needs to describe it as a "beta test". The long answer is detailed in post #53.
At a state where no one can imagine how to improve the software? Then we'll never get self-driving cars.
Obviously, that never happens for anything.
Airplane development is still ongoing, 100 years after the Wright brothers built the first proper airplane. Do we still have to wait for accident rates to decrease further before we can use them? Yes, this is an exaggerated example, but the concept is the same.
No, it isn't. Airplanes must be certified by the FAA before they are allowed to fly passengers. The test process is exhaustive, and no airplane is ever put into service while described as a "beta test". The process takes about a decade (edit: the 787 took 8 years to develop, of which 2 years were from first flight to FAA certification, plus 2 prior years of ground testing).
"Oh **** there is a truck in front of me, I'll brake no matter what the car would do" should work quite well.
No it won't. The car is supposed to do the braking for you, so the behavior you described runs contrary to how the system is supposed to work.
With some driving experience it's something you would not even have to think about when driving yourself.
The particular driver this thread is discussing won't be getting additional experience to learn that. Hopefully some other people will.
 
Last edited:
  • #66
mheslep said:
I'd not take their description of "beta" to necessarily mean they were admitting they intentionally released an unsafe vehicle. Rather, the unsafe observation comes about from the details of this accident.
I didn't say Tesla admitted it was "unsafe", I said they admitted it was not ready for public release. The dice roll comes from not knowing exactly how buggy the software is until it starts crashing on you. When that happens with a Windows beta release, not a big deal...but this crash killed someone.
 
  • #67
Regarding semi-autonomous vs fully autonomous driving features:

I downloaded an Audi A7 owners' manual here:
https://ownersmanuals2.com/get/audi-a7-sportback-s7-sportback-2016-owner-s-manual-65244

On page 87, it says "Adaptive cruise control will not make an emergency stop." This leaves primary responsibility firmly in the lap of the driver, though it does leave an obvious open question: what constitutes an "emergency stop"? Well, there is an indicator light for it and a section describing when "you must take control and brake", titled "Prompt for driver intervention." Presumably, the Tesla that crashed did not prompt the driver that his intervention was required.

The system also requires the driver to set the following distance and warns about the stopping time required increasing with speed -- so more attention is required.

For "braking guard", it says: "The braking guard is an assist system and cannot prevent a collision by itself. The driver must always intervene."
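Functionally, what those manual passages describe is a limit on the system's braking authority: the ACC slows the car within some envelope, and anything requiring more than that triggers the driver-intervention prompt instead. A hypothetical sketch of that logic (numbers and names invented for illustration; this is not Audi's actual implementation):

```python
# Hypothetical sketch of one adaptive-cruise-control decision step.
# The deceleration limit and names are invented for illustration;
# this is not Audi's (or anyone's) actual implementation.

MAX_ACC_DECEL = 3.5   # m/s^2 the system is allowed to command on its own (assumed)

def acc_step(own_speed, lead_speed, gap, min_gap):
    """Return (commanded_decel, prompt_driver) for one control cycle.
    Speeds in m/s, distances in meters."""
    closing_speed = own_speed - lead_speed
    if closing_speed <= 0:
        return 0.0, False                      # not closing in; nothing to do
    # deceleration needed to match the lead car's speed before reaching min_gap
    available = max(gap - min_gap, 0.1)
    needed = closing_speed ** 2 / (2 * available)
    if needed > MAX_ACC_DECEL:
        # beyond the system's authority: brake what it can and demand the driver
        return MAX_ACC_DECEL, True
    return needed, False
```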

Here's what AAA has to say:
In What Situations Doesn’t It Work?
ACC systems are not designed to respond to stationary or particularly small objects. Camera-based systems can be affected by the time of day and weather conditions, whereas radar-based systems can be obstructed by ice or snow. Surveys have shown that relatively few drivers are aware of these types of limitations, and may overestimate the system’s protective benefit. Some drivers also have difficulty telling when ACC (vs. standard cruise control) is active.
https://www.aaafoundation.org/adaptive-cruise-control

A wired article on the issue:
It happened while I was reviewing a European automaker's flagship luxury sedan.

I was creeping along on Interstate 93 in Boston, testing the active cruise-control system and marveling at the car’s ability to bring itself to a stop every time traffic halted. Suddenly an overly aggressive driver tried muscling into traffic ahead of me.

Instead of stopping, however, my big-ticket sedan moved forward as if that jerk wasn’t even there. I slammed on the brakes — stopping just in time — and immediately shut off the active cruise control.

It was a stark reminder that even the best technology has limits and a good example of why drivers should always pay attention behind the wheel. But it got me thinking:

What would’ve have happened if I hadn’t stopped?
http://www.wired.com/2011/06/active-safety-systems/

That article has good discussion of the definitions (autonomous vs semiautonomous or assist functions) and the logic they bring to bear. It continues:
Their only purpose is to slam on the brakes or steer the car back into your lane if a collision is imminent. In other words, if you notice the systems at work, you’re doing it wrong. Put down the phone and drive.

That's a key consideration. No automaker wants to see drivers fiddling with radios and cell phones, lulled into a false sense of security and complacency thinking the car will bail them out of trouble.
That was exactly the problem with the accident in question: the person driving thought the car would avoid trouble because it was supposed to be capable of avoiding trouble. Contrast that with the "assist" features, which *might* help you avoid trouble that you are already trying to avoid yourself.

These features are wholly different from an autopilot, which is intended to be fully autonomous.

It continues, for the next step of where this particular incident might go:
Of the three possibilities, Baker said liability would be easiest to prove if a system failed to work properly.

“It would be analogous to a ‘manufacturing defect,’ which is a slam dunk case for plaintiffs in the product liability context,” he said.
 
  • #68
Interestingly, here is a slide from a Tesla presentation a couple of months ago that has quite a few similarities with the path I described, though it is missing the last part (the approval):

[Slide: Tesla's autopilot development process]


http://electrek.co/2016/05/24/tesla-autopilot-miles-data/

Given that the feature was activated one year after the hardware first went on sale, the "simulation" couldn't have been based on real world test data from the car and the "in-field performance validation" must have been less than one year.

Tesla sold roughly 50,000 Model S's in 2015. At an average of 10,000 miles per year per car and a constant sales rate, that's 250 million miles of recorded data. With an average of 1 death per 100 million miles for normal cars, that means all of this data would have recorded at most about *3* deaths -- at most 2 fatal crashes. That's not a lot, and nowhere near enough to prove they are safer than human drivers (in deaths, anyway).

And, of course, all of that assumes they are driven in autopilot in situations that are as risky as average -- which is almost certainly not true.
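Spelled out, the arithmetic behind that estimate is just the following (same rough assumptions as in the text, not actual Tesla fleet data):

```python
# The back-of-the-envelope estimate above, spelled out. The inputs are the
# same rough assumptions used in the text, not actual Tesla fleet data.

cars_sold_2015 = 50_000
miles_per_car_per_year = 10_000
avg_fraction_of_year_owned = 0.5      # constant sales rate through the year

fleet_miles = cars_sold_2015 * miles_per_car_per_year * avg_fraction_of_year_owned
human_fatality_rate = 1 / 100e6       # ~1 death per 100 million miles

expected_deaths = fleet_miles * human_fatality_rate
print(f"{fleet_miles / 1e6:.0f} million miles -> ~{expected_deaths:.1f} expected "
      "deaths at the average human risk level")
```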
 
  • #69
mheslep said:
How does singling out a case of "truck in front" change anything? Riding autonomously one will routinely see dozens of obstacles requiring the brake to avoid a collision: the red light intersection, the downed tree, etc. One either gets ahead of the software in every possible case, manually applying the brakes, or never -- one can't do so selectively without detailed knowledge of the SW corner cases. In the former case, routine manual override, of what use are autonomous vehicles?
I used the specific example, but it does not really matter. If the reason to react is visible ahead, a normally working self-driving car won't brake in the last possible second, leaving enough time for the driver to see that the situation might get problematic. If a problem appears suddenly and requires immediate reaction, the driver can simply react, as they do in normal cars as well. Why would you wait for the car?

Also keep in mind that we are discussing "beyond the safety level of a human driving a car" already: we are discussing how to reduce the accident rate even further by combining the capabilities of the car with those of the human driver.
mheslep said:
In the former case, routine manual override, of what use are autonomous vehicles?
Emergency situations are not routine. Autonomous vehicles are more convenient to drive - currently their only advantage. Once the technology is more mature, completely driverless taxis, trucks and so on can have a huge economic impact.

russ_watters said:
On page 87, it says "Adaptive cruise control will not make an emergency stop."
How many drivers read page 87 of the manual? Even worse if you see the car slowing down under normal conditions but cannot rely on it in emergencies.

Concerning the approval: as you said already, regulations are behind. There is probably no formal approval process that Tesla could even apply for.
 
  • #70
mfb said:
...If the reason to react is visible ahead, a normally working self-driving car won't brake in the last possible second, leaving enough time for the driver to see that the situation might get problematic...
That sounds completely unreasonable, but perhaps I misunderstand. You suggest the driver monitor every pending significant action of the would-be autonomous vehicle, and in the, say, 2 or 3 seconds before a problem, if the vehicle fails to react in the first second or so the human driver should always stand ready to pounce? During test and development, this is indeed how the vehicles are tested in my experience, generally only at very low speeds. But I can't see any practical release to the public under those conditions.

Emergency situations are not routine.
The case of this truck turning across the highway *was* routine. The situation turned from routine to emergency in a matter of seconds, as would most routine driving when grossly mishandled.
 
  • #71
mheslep said:
That sounds completely unreasonable, but perhaps I misunderstand. You suggest the driver monitor every pending significant action of the would-be autonomous vehicle, and in the, say, 2 or 3 seconds before a problem, if the vehicle fails to react in the first second or so the human driver should always stand ready to pounce?
At least for now, the driver should watch the street and be ready to take over if necessary. Yes. That's what Tesla requires the drivers to do, and for now it is a reasonable requirement.
The situation is not that different if you drive yourself on a highway, for example: typically you adjust your speed and course slightly once in a while to stay in your lane and to keep your distance from the car ahead, or to stay at your preferred speed. If an emergency situation occurs, you suddenly change that, and brake and/or steer. With Tesla, the main difference is that the slight speed and course adjustments are done by the car. It can also avoid emergency situations by braking before it gets dangerous. If an emergency situation comes up, brake - but chances are good the car starts braking before you can even react: an additional gain in safety.
mheslep said:
The case of this truck turning across the highway *was* routine.
Depends on how much warning time the car/driver had, something I don't know. Do you? If there was enough time, any driver watching the street could have started braking before it got dangerous. I mean, what do you expect drivers to do if they watch the street and the car does something wrong, wait happily until the car hits the truck at full speed?
 
  • #72
The WSJ article out today on the Tesla autopilot is superb, as the WSJ frequently is on industry topics. The article nails the game Tesla is playing with the autopilot: putting out official due-diligence disclaimer statements about the autopilot on one hand, and promoting its capabilities on the other. There have been several severe accidents and some near misses averted by drivers in the autopilot vehicles, though only the one fatality.

The wonder-vehicle line:
In March 2015, Mr. Musk told reporters the system could make it possible in the future for Tesla vehicles to drive themselves “from parking lot to parking lot.”...

In April of this year, Mr. Musk retweeted a video by Mr. Brown that shows the Autopilot of his Model S preventing a crash...

Tesla has said its technology is the most advanced system on the market,...

The due-diligence line:
Tesla said the self-driving system worked exactly as it should have, citing data from the car’s “historical log file,” a document signed by Mr. Bennett, and an owner’s manual declaring the technology “cannot detect all objects and may not brake/decelerate for stationary vehicles.”...

Owner’s manuals state that the technology “is designed for your driving comfort and convenience and is not a collision warning or avoidance system.”...

customers are told explicitly what Autopilot is and isn’t capable of doing.
Oh, if it was in the owner's manual, that's OK then. Ford should have put a notice in their Pinto owner's manual years ago that rear impacts could ignite the gas tank, and thereby avoided the 1.5 million vehicle recall.
 
  • Like
Likes russ_watters, Jaeusm and nsaspook
  • #73
mheslep said:
Oh, if it was in the owner's manual, that's OK then. Ford should have put a notice in their Pinto owner's manual years ago that rear impacts could ignite the gas tank, and thereby avoided the 1.5 million vehicle recall.
You can use a Tesla without the autopilot. You cannot use a Pinto without a gas tank.
 
  • #74
  • Like
Likes mheslep
  • #75
Another 'autopilot' incident. This one seems to demonstrate how completely some people believe the autopilot hype.

http://abcnews.go.com/Business/wireStory/feds-seek-autopilot-data-tesla-crash-probe-40515954
The company said the Model X alerted the driver to put his hands on the wheel, but he didn't do it. "As road conditions became increasingly uncertain, the vehicle again alerted the driver to put his hands on the wheel. He did not do so and shortly thereafter the vehicle collided with a post on the edge of the roadway," the statement said.

The car negotiated a right curve and went off the road, traveling about 200 feet on the narrow shoulder, taking out 13 posts, Shope said.

The trooper did not cite the driver, saying he believed any citation would be voided because of the driver's claim that the car was on Autopilot.

https://teslamotorsclub.com/tmc/threads/my-friend-model-x-crash-bad-on-ap-yesterday.73308/
 
  • #78
mheslep said:
Musk has an auto response ready for autopilot accidents: "Not material"
http://fortune.com/2016/07/05/elon-musk-tesla-autopilot-stock-sale/
I'm not sure of the legality/insider trading implications of that, but it will be interesting to see if that goes anywhere. Something that has bothered me since the start but was more strongly/specifically worded in that article is this:
He [Musk] continued, “Indeed, if anyone bothered to do the math (obviously, you did not) they would realize that of the over 1M auto deaths per year worldwide, approximately half a million people would have been saved if the Tesla autopilot was universally available. Please, take 5 mins and do the bloody math before you write an article that misleads the public.”
The implicit claim (also implied in this thread) is that Tesla's fatalities per 100 million miles stat is comparable to the NHTSA's. Is it? That would surprise me a lot, given in particular Tesla's insistence that the autopilot should only be used in easy driving situations.

Here's the NHTSA's stats:
http://www-fars.nhtsa.dot.gov/Main/index.aspx

They don't differentiate different driving regimes (highway vs local roads for example), but they do include pedestrians and motorcyclists, which wouldn't seem an appropriate comparison. But given the low number of Tesla miles driven, if a Tesla kills a motorcyclist or pedestrian in the next couple of years, their fatality rate for those categories would be vastly higher than average.
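The small-sample problem can be quantified: with a single fatality observed over the autopilot fleet's mileage, the statistical uncertainty on the underlying rate is enormous. A rough Poisson interval, using an assumed round figure for autopilot miles (not a number from this thread):

```python
# How wide is the uncertainty on a fatality rate estimated from one event?
# Exact Poisson confidence interval via the chi-squared relation. The
# autopilot mileage is an assumed round figure, not a number from this thread.

from scipy import stats

autopilot_miles = 130e6       # assumed figure for illustration
observed_deaths = 1

# 95% interval on the expected number of deaths, given 1 observed event
lo = stats.chi2.ppf(0.025, 2 * observed_deaths) / 2
hi = stats.chi2.ppf(0.975, 2 * (observed_deaths + 1)) / 2

scale = 100e6 / autopilot_miles               # convert counts to "per 100M miles"
print(f"point estimate: {observed_deaths * scale:.2f} per 100M miles")
print(f"95% interval:   {lo * scale:.2f} to {hi * scale:.2f} per 100M miles")
```

Under those assumptions the interval runs from far below to several times above the ~1.08 human figure quoted earlier in the thread, which is the point: one event proves very little in either direction.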
 
  • #79
russ_watters said:
... Something that has bothered me since the start but was more strongly/specifically worded in that article is this:

The implicit claim (also implied in this thread) is that Tesla's fatalities per 100 million miles stat is comparable to the NHTSA's. Is it? That would surprise me a lot, given in particular Tesla's insistence that the autopilot should only be used in easy driving situations.

Here's the NHTSA's stats:
http://www-fars.nhtsa.dot.gov/Main/index.aspx
I find Musk's response to Fortune about the accident rate, as quoted, both highly arrogant and invalid. A few researchers in the autonomous vehicle field have already pointed out that comparing Tesla's data miles from a heavy, strongly constructed sedan to the variation in the world's vehicles, including the likes of motorcycles, is invalid.

The arrogance comes about in justifying the macro effect of autonomous vehicles on accident rates at large while AVs might well take more lives in sub-groups of drivers. Musk is not entitled to make that decision. That is, several factors impact the accident rate given by NHTSA. One of the highest is still alcohol, a factor in about a third of all US accidents.
https://www.google.com/search?q=percentage+of+vehicle+accidents+involving+alcohol
Clearly, AVs, despite their blindness to under-crossing trucks, could greatly lower alcohol-related accidents and thereby make AV stats look better.

But what about, say, drivers hauling their kids to school? Wild guess here, but among that group I'm guessing alcohol-related accidents are zero point nothing, and their tendency to do other things that raise the accident rate -- like exceeding the posted limit by 20 mph, running stops, or letting their tires go bald -- is also well below normal. Their accident rate is *not* the normal Musk says his AV can beat, saving the millions. If one of them in an AV drives under a truck and continues on as if nothing happened? Not material to Musk.
 
  • #80
NHTSA files formal request for crash information (about all crashes):
http://www.foxnews.com/leisure/2016/07/13/feds-examine-how-tesla-autopilot-reacts-to-crossing-traffic/?intcmp=hpffo&intcmp=obnetwork

It is due August 26 and carries fines if not responded to.
 
  • #81
mheslep said:
If one of them in an AV drives under a truck and continues on as if nothing happened?
Automated cars make different accidents. The dataset is too small to make precise comparisons to human drivers, but human drivers make accidents an automated car won't make (because it is never drunk for example), while automated vehicles make accidents a human driver won't make. Luckily you can combine both in a Tesla: read the manual and pay attention to the road if you use the autopilot. That way you can limit the accident rate to accidents where both autopilot and the human fail to react properly.
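The "both must fail" idea can be put in numbers, though only under the dubious assumption that the human's and the autopilot's failures are independent; all rates below are invented for illustration:

```python
# Toy calculation of the "both must fail" idea, under the questionable
# assumption that human and autopilot failures are independent.
# All probabilities are invented for illustration.

p_human_misses = 0.01       # chance an attentive human fails to react (assumed)
p_autopilot_misses = 0.02   # chance the autopilot fails to react (assumed)

p_both_miss = p_human_misses * p_autopilot_misses   # independence assumption
print(f"combined miss probability: {p_both_miss:.4f}")  # 0.0002

# Caveat raised later in the thread: a driver who trusts the autopilot pays
# less attention, so the two failures are correlated and the real number is worse.
```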
 
  • #82
mfb said:
Automated cars make different accidents. The dataset is too small to make precise comparisons to human drivers,
Yes, this is why Musk needs to stop making condescending comments about doing the math on AVs.

but human drivers make accidents an automated car won't make (because it is never drunk for example), while automated vehicles make accidents a human driver won't make. Luckily you can combine both in a Tesla: read the manual and pay attention to the road if you use the autopilot. That way you can limit the accident rate to accidents where both autopilot and the human fail to react properly.

One can't claim credit on the one hand for lower accident rates since an AV is never drunk while on the other demand drunk humans pay attention, else AV mistakes are the fault of humans not paying attention.

More generally, humans are lousy at paying attention while not being actively involved. Researchers in AVs keep repeating this.
 
Last edited:
  • Like
Likes nsaspook
  • #83
mheslep said:
One can't claim credit on the one hand for lower accident rates since an AV is never drunk while on the other demand drunk humans pay attention, else AV mistakes are the fault of humans not paying attention.
I would expect an AV with a drunk driver paying attention (sort of) to be better than a drunk driver driving. Both are illegal, of course.
mheslep said:
More generally, humans are lousy at paying attention while not being actively involved.
I wonder if you could involve them via some mandatory mini-games that involve watching traffic. Press a button every time a red car passes by, or whatever.
 
  • #84
mheslep said:
More generally, humans are lousy at paying attention while not being actively involved. Researchers in AVs keep repeating this.

The Tesla 'Autopilot' operates in the man-machine control loop at the point (physical driving process) where the brain during driving normally operates primarily below full awareness in the land of instincts and impulses. We learn to trust our internal human 'Autopilot' to evaluate the environment and warn the fully aware driving brain of danger in time to avoid problems.

The problem with the Tesla system IMO is trust. Tesla has managed to create a system (the entire human-machine interface) that seems so good it can be trusted to drive even if the manual says NO. Ok, we have little game clues to maintain driving focus but as long as the car handles those subconscious driving activities without intervention we know it's just a game instead of real driving.

https://hbr.org/2016/07/tesla-autopilot-and-the-challenge-of-trusting-machines
That decision — to trust or not — isn’t necessarily a rational one, and it’s not based only on the instructions people are given or even the way the machine is designed; as long as there have been cars, there have been people who anthropomorphize them. But the addition of technology that starts to feel like intelligence only furthers that inclination. We trust machines when we see something like ourselves in them — and the more we do that, the more we’re likely to believe they’re as capable as we are.

http://arstechnica.com/cars/2016/05...las-autopilot-forced-me-to-trust-the-machine/
It takes a while to get used to this feeling. Instead of serving as the primary means of direction for a car, you're now a meat-based backup and failsafe system. Instincts and impulses formed by more than two decades behind the wheel scream out a warning—"GRAB THE WHEEL NOW OR YOU'LL DIE"—while the rational forebrain fights back. Eventually, the voices quiet as the car starts to prove itself. When the road curves, the car follows. If the car is ever going too fast to negotiate the curve, it slows down and then accelerates smoothly back out of the turn.
 
Last edited:
  • #86
nsaspook said:
The Tesla 'Autopilot' operates in the man-machine control loop at the point (physical driving process) where the brain during driving normally operates primarily below full awareness in the land of instincts and impulses. We learn to trust our internal human 'Autopilot' to evaluate the environment and warn the fully aware driving brain of danger in time to avoid problems.

The problem with the Tesla system IMO is trust. Tesla has managed to create a system (the entire human-machine interface) that seems so good it can be trusted to drive even if the manual says NO. Ok, we have little game clues to maintain driving focus but as long a the car handles those subconscious driving activities without intervention we know it's just a game instead of real driving.
That's basically how I see it, though I would describe it as "responsibility" instead of trust (but it could be trust in who has responsibility...).

New back-up safety features like brake assist/overrides don't need to have that trust/responsibility because the person is still supposed to be fully in charge: the computer assist happens after the human has already failed. Such systems can only improve safety because the human never has to think about who has responsibility.

On the other hand, systems where the computer has primary responsibility and the person "back-up" responsibility are inherently flawed/self-contradictory because the person doesn't have the reaction speed necessary to take over if the computer fails. So the person's responsibility has to be either all or nothing.

This problem can be mitigated somewhat, but it can't be fixed, by requiring a person to keep their hands on the wheel or by a notification that the driver has to take back control -- and that applies even to radar-assisted cruise controls. Because it isn't the failures that the computer knows about that are the biggest problem; it's the ones it doesn't know about.

Sooner or later, a Tesla will notify its driver to take back control just as its front wheels go over the cliff. Elon won't be successful in using the "hey, it warned you!" defense there.
 
  • Like
Likes mheslep
  • #87
russ_watters said:
Consumer Reports calls on Tesla to disable autopilot:

http://www.usatoday.com/story/money...umer-reports-tesla-motors-autopilot/87075956/

Regaining Control
Research shows that humans are notoriously bad at re-engaging with complex tasks after their attention has been allowed to wander. According to a 2015 NHTSA study (PDF), it took test subjects anywhere from three to 17 seconds to regain control of a semi-autonomous vehicle when alerted that the car was no longer under the computer's control. At 65 mph, that's between 100 feet and quarter-mile traveled by a vehicle effectively under no one's control.

This is what’s known by researchers as the “Handoff Problem.” Google, which has been working on its Self-Driving Car Project since 2009, described the Handoff Problem in https://www.google.com/selfdrivingcar/files/reports/report-1015.pdf. "People trust technology very quickly once they see it works. As a result, it’s difficult for them to dip in and out of the task of driving when they are encouraged to switch off and relax,” said the report. “There’s also the challenge of context—once you take back control, do you have enough understanding of what’s going on around the vehicle to make the right decision?"
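For scale, the ground covered during that 3-17 second window at 65 mph is a straight unit conversion:

```python
# Ground covered during the 3-17 second "handoff" window quoted above, at 65 mph.

speed_mph = 65
ft_per_s = speed_mph * 5280 / 3600      # ~95 ft/s

for seconds in (3, 17):
    feet = seconds * ft_per_s
    print(f"{seconds:2d} s -> {feet:5.0f} ft (~{feet / 5280:.2f} mi)")
```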
 
  • Like
Likes mheslep
  • #88
17 seconds is a long time, but 3 seconds is still too long and not surprising. Situational awareness requires history. A simple action like stomping on a brake can fix a lot of problems, but if the car is, for example, having trouble negotiating a curve, you have to get the feel of the handling before you can figure out how much steering to apply.
 
  • #89
Regaining Control
Research shows that humans are notoriously bad at re-engaging with complex tasks after their attention has been allowed to wander. According to a 2015 NHTSA study (PDF), it took test subjects anywhere from three to 17 seconds to regain control of a semi-autonomous vehicle when alerted that the car was no longer under the computer's control. ...

This is what’s known by researchers as the “Handoff Problem.” Google, which has been working on its Self-Driving Car Project since 2009, described the Handoff Problem in https://www.google.com/selfdrivingcar/files/reports/report-1015.pdf. "People trust technology very quickly once they see it works. As a result, it’s difficult for them to dip in and out of the task of driving when they are encouraged to switch off and relax,” said the report. “There’s also the challenge of context—once you take back control, do you have enough understanding of what’s going on around the vehicle to make the right decision?"

Thanks NSA. Of course. Of course. This is not a matter of dismissing the owner's manual; it is simply impossible to change the attention behaviour of human beings as a group. If it were, AVs would be moot.

Tesla's assertion that the driver must pay attention in autopilot mode, else accidents are the driver's fault, is ridiculous on its face -- akin to encouraging drivers to cruise around with no safety harness on, but to continually brace themselves against the one-ton forces of an impact. If you go out the windshield, you just didn't brace hard enough. Now stop bothering us, we're inventing the future.
 
  • #91
US-27A is a four-lane highway with a posted speed limit of 65 mph.
[...]
Tesla system performance data downloaded from the car indicated that vehicle speed just prior to impact was 74 mph.
How did that happen?
 
  • #92
mfb said:
How did that happen?
The Tesla autopilot does not pick the speed; the driver does.
 
  • #93
Odd. What happens if the driver sets it to 65 mph and then the speed limit gets lowered at some point of the road?
 
  • #94
mfb said:
Odd. What happens if the driver sets it to 65 mph and then the speed limit gets lowered at some point of the road?
I don't think the car responds. I read through some descriptions a few weeks ago and the actual capabilities are well short of what the hype had implied to me.
 
  • #95
Then drivers have to look for speed limits, which means they have to pay attention? scnr
 
  • #96
mfb said:
Then drivers have to look for speed limits, which means they have to pay attention? scnr
They are supposed to, yes.
 
  • #97
I'm also curious about how the driver's desired speed coordinates with the autopilot settings, but I doubt speeding (+9 mph in a 65 mph zone) was relevant to this accident. Tesla says the driver never hit the brakes anyway making the accident unavoidable by the driver at any reasonable speed, and this particular accident was not one where the vehicle came to a stop at impact.
 
  • #98
mheslep said:
I'm also curious about how the driver's desired speed coordinates with the autopilot settings, but I doubt speeding (+9 mph in a 65 mph zone) was relevant to this accident. Tesla says the driver never hit the brakes anyway making the accident unavoidable by the driver at any reasonable speed, and this particular accident was not one where the vehicle came to a stop at impact.
I've read several adaptive cruise control specs and recall that there is generally an upper limit to the setpoint.

I think what mfb is driving at, though, is that this is another reason why the driver has to pay attention. And I agree, particularly for the type of road in this case: a 4-lane local divided highway will often take you through towns where the speed limit drops to 35. If the driver doesn't pay attention, the car would speed right through.
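If the cruise setpoint were tied to a posted-limit input, clamping it would be trivial; a hypothetical sketch (this does not reflect Tesla's actual software, which per the discussion above apparently did not adjust to changing limits):

```python
# Hypothetical sketch of clamping a cruise setpoint to a detected speed limit.
# Nothing here reflects Tesla's actual software; per the discussion above, the
# car at the time apparently did not adjust to changing posted limits.

def effective_setpoint(driver_setpoint_mph, detected_limit_mph=None, max_over_mph=0):
    """Return the speed the controller should actually hold."""
    if detected_limit_mph is None:
        return driver_setpoint_mph                  # no limit info: trust the driver
    return min(driver_setpoint_mph, detected_limit_mph + max_over_mph)

print(effective_setpoint(74, detected_limit_mph=65))   # -> 65
print(effective_setpoint(74, detected_limit_mph=35))   # -> 35
```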
 
  • Like
Likes mheslep
  • #99
There's something fundamentally wrong with enabling idiocy at the wheel.

Too much automation. That's how you get pilots who cannot land an airliner when autopilot gets confused.

A lane hold makes sense, as does a wing leveler, but I'd have a time limit on it and lock out simultaneous speed hold.

Try autopilot on this road at 74 mph.
[attached road photo]
 
Last edited:
  • #100
russ_watters said:
I've read several adaptive cruise control specs and recall that there is generally an upper limit to the setpoint.

I think what mfb is driving at, though, is that this is another reason why the driver has to pay attention. And I agree, particularly for the type of road in this case: a 4-lane local divided highway will often take you through towns where the speed limit drops to 35. If the driver doesn't pay attention, the car would speed right through.
Apparently so for this version of the Tesla, though I don't know the details. Autonomous vehicles theoretically can use GPS and road data to know where speed changes occur (which would always have some disagreement with reality), and then the ability also exists for the vision system to "read" posted speed signs.

Video from Tesla's autonomous software vendor (until yesterday):

[embedded video]

This too can have real world limitations that wouldn't catastrophically trip up people.

[Photo: speed limit sign obscured by a hedge]
 
Last edited: