News: First fatal accident involving a car in self-driving mode

Summary
The first fatal accident involving a self-driving Tesla occurred when a tractor-trailer made a left turn in front of the vehicle, which failed to brake. Concerns were raised about the software's ability to detect obstacles, particularly obstacles not in contact with the road surface, leading to questions about its reliability in various driving scenarios. Despite the incident, statistics suggest that Tesla's autopilot may still have a lower accident rate compared to human drivers. The discussion highlighted the need for better oversight and regulation of autonomous vehicles, as well as the ethical implications of allowing such technology to operate without adequate human monitoring. Overall, the incident underscores the challenges and risks associated with the early deployment of self-driving technology.
  • #61
russ_watters said:
I don't understand. I'm very explicitly saying I want to collect more data, not less data and you are repeating back to me that I want to collect less data and not more data.
The difference is the time. You suggest having less actual driving data at the end of 2016, less data at the end of 2017, and so on.

You suggest having more test data at the time of the introduction. But that time is different for the two scenarios. That is the key point.

russ_watters said:
Along a similar vein, the FDA stands in the way of new drug releases, also almost certainly costing lives (and certainly also saving lives). Why would they do such a despicable thing!? Because they need the new drug to have proven it is safe and effective before releasing it to the public.
And how do you prove it? After laboratory and animal tests, you test it on progressively larger groups of humans who volunteer to test the new drug, first with healthy humans and then with those who have an application for the drug. That's exactly what Tesla did, where the lab/animal steps got replaced by test cars driving around somewhere on the company's grounds.

russ_watters said:
See, with Tesla you are assuming it will, on day 1, be better than human drivers. It may be or it may not be. We don't know and they weren't required to prove it. But even if it was, are you really willing to extend that assumption to every car company? Are you willing to do human trials on an Isuzu or Pinto self driving car?
I personally would wait until those cars drove some hundreds of thousands of kilometers. But no matter how long I would wait, if the software requires the driver to pay attention, I would (a) do that and (b) even if I wouldn't, if the accident rate is not above that of human drivers, I wouldn't blame the company for releasing such a feature for testing.
russ_watters said:
But yeah, you are right: if you skip development and release products that are unfinished, you can release products faster.
When do you expect the development of self-driving cars to be finished? What does "finished" even mean? At a state where no one can imagine how to improve the software? Then we'll never get self-driving cars.

Airplane development is still ongoing, 100 years after the Wright brothers built the first proper airplane. Do we still have to wait to decrease accident rates further before we can use them? Yes, this is an exaggerated example, but the concept is the same. Early airplanes had large accident rates. But without those early airplanes we would never have our modern airplanes with less than one deadly accident per million flights.

russ_watters said:
A human cannot reliably override a computer if the computer makes a mistake.
"Oh **** there is a truck in front of me, I'll brake no matter what the car would do" should work quite well. With some driving experience it's something you would not even have to think about when driving yourself.
 
  • #62
Jenab2 said:
How do you know when an object in the field-of-view has a signal-to-noise ratio too low to be detected?

It isn't just Tesla's autopilot that has that problem. Many accidents have happened when human drivers had their vision momentarily interrupted by reflection or by glare caused by scattering or refraction.
Humans don't entirely miss tractor-trailers for several seconds in daylight and clear weather. Nor, per reports, did the vehicle in this case; the problem was that the vehicle had no concept of what a truck is, mistaking the trailer's high ground clearance for a clear path.
 
  • #63
russ_watters said:
I know one thing, per Tesla's own description of it: it isn't ready for public release.

I'm all for them solving the problem before implementing it.
...
I'd not take their description of "beta" to necessarily mean they were admitting they intentionally released an unsafe vehicle. Rather, the unsafe observation comes from the details of this accident. Tesla frequently ships new software out to its cars that is unrelated to autonomous driving, software that changes the likes of charging time and ride height; they sometimes call these releases beta, but that does not mean the release is safety-critical.
 
  • #64
mfb said:
...

"Oh **** there is a truck in front of me, I'll brake no matter what the car would do" should work quite well. With some driving experience it's something you would not even have to think about when driving yourself.

How does singling out a case of "truck in front" change anything? Riding autonomously, one will routinely see dozens of obstacles requiring braking to avoid a collision: the red-light intersection, the downed tree, etc. One either gets ahead of the software in every possible case, manually applying the brakes, or never does, absent detailed knowledge of the software's corner cases. In the former case (routine manual override), of what use are autonomous vehicles?
 
  • #65
mfb said:
The difference is the time. You suggest having less actual driving data at the end of 2016, less data at the end of 2017, and so on.

You suggest having more test data at the time of the introduction. But that time is different for the two scenarios. That is the key point.
OK, sure -- more data before release, which means less real-world data accumulated by any given calendar year. Yes: releasing early gives more real-world data by a certain date, but less data before release. I favor more data/testing before release and less "pre-release testing" (if that phrase even has meaning anymore) on the public.
mfb said:
And how do you prove it? After laboratory and animal tests, you test it on progressively larger groups of humans who volunteer to test the new drug, first with healthy humans and then with those who have an application for the drug. That's exactly what Tesla did, where the lab/animal steps got replaced by test cars driving around somewhere on the company's grounds.
They did? When? How much? Where can I download the report, application and see the government approval announcement?

Again: they are referring to the current release as a "beta test" and the "beta test" started a calendar year after the first cars with the sensors started to be sold. Drugs don't have "beta tests" that involve anyone who wants to try them and both cars and drugs take much, much longer than one year to develop and test.
mfb said:
I personally would wait until those cars drove some hundreds of thousands of kilometers.
[assuming Tesla did that, which I doubt...] You're not the CEO of Isuzu. What if he disagrees and thinks it should only take dozens of miles? Does he get to decide his car is ready for a public "alpha test"? How would we stop him from doing that?
mfb said:
But no matter how long I would wait, if the software requires the driver to pay attention, I would (a) do that...
Your reaction time is inferior to the computer's: you are incapable of paying enough attention to correct some mistakes made by the car. Also, the software doesn't require the driver to pay attention; the terms and conditions do. That's different from driver-assist features, where the software literally requires the driver to pay attention. I'll provide more details in my next post.
mfb said:
...and (b) even if I wouldn't, if the accident rate is not above that of human drivers, I wouldn't blame the company for releasing such a feature for testing.
Catch-22: you can't know that until after it happens if you choose not to regulate the introduction of the feature.
mfb said:
When do you expect the development of self-driving cars to be finished? What does "finished" even mean?
The short answer (for an unregulated industry) is that development is "finished" when a company feels confident enough in it that it no longer needs to describe it as a "beta test". The long answer is detailed in post #53.
mfb said:
At a state where no one can imagine how to improve the software? Then we'll never get self-driving cars.
Obviously, that never happens for anything.
mfb said:
Airplane development is still ongoing, 100 years after the Wright brothers built the first proper airplane. Do we still have to wait to decrease accident rates further before we can use them? Yes, this is an exaggerated example, but the concept is the same.
No it isn't. Airplanes have to be certified by the FAA before they are allowed to fly passengers. The test process is exhaustive, and no airplane is ever put into service while described as a "beta test". The process takes a decade (edit: the 787 took 8 years to develop, of which 2 were from first flight to FAA certification, plus 2 prior years of ground testing).
"Oh **** there is a truck in front of me, I'll brake no matter what the car would do" should work quite well.
No it won't. The car is supposed to do the braking for you, so the behavior you described runs contrary to how the system is supposed to work.
mfb said:
With some driving experience it's something you would not even have to think about when driving yourself.
The particular driver this thread is discussing won't be getting additional experience to learn that. Hopefully some other people will.
 
  • #66
mheslep said:
I'd not take their description of "beta" to necessarily mean they were admitting they intentionally released an unsafe vehicle. Rather, the unsafe observation comes from the details of this accident.
I didn't say Tesla admitted it was "unsafe", I said they admitted it was not ready for public release. The dice roll comes from not knowing exactly how buggy the software is until it starts crashing on you. When that happens with a Windows beta release, not a big deal...but this crash killed someone.
 
  • #67
Regarding semi-autonomous vs fully autonomous driving features:

I downloaded an Audi A7 owners' manual here:
https://ownersmanuals2.com/get/audi-a7-sportback-s7-sportback-2016-owner-s-manual-65244

On page 87, it says "Adaptive cruise control will not make an emergency stop." This leaves primary responsibility firmly in the lap of the driver, though it does leave an obvious open question: what constitutes an "emergency stop"? Well, there is an indicator light for it and a section describing when "you must take control and brake", titled "Prompt for driver intervention." Presumably, the Tesla that crashed did not prompt the driver that his intervention was required.

The system also requires the driver to set the following distance and warns about the stopping time required increasing with speed -- so more attention is required.

For "braking guard", it says: "The braking guard is an assist system and cannot prevent a collision by itself. The driver must always intervene."

Here's what AAA has to say:
In What Situations Doesn’t It Work?
ACC systems are not designed to respond to stationary or particularly small objects. Camera-based systems can be affected by the time of day and weather conditions, whereas radar-based systems can be obstructed by ice or snow. Surveys have shown that relatively few drivers are aware of these types of limitations, and may overestimate the system’s protective benefit. Some drivers also have difficulty telling when ACC (vs. standard cruise control) is active.
https://www.aaafoundation.org/adaptive-cruise-control

A wired article on the issue:
It happened while I was reviewing a European automaker's flagship luxury sedan.

I was creeping along on Interstate 93 in Boston, testing the active cruise-control system and marveling at the car’s ability to bring itself to a stop every time traffic halted. Suddenly an overly aggressive driver tried muscling into traffic ahead of me.

Instead of stopping, however, my big-ticket sedan moved forward as if that jerk wasn’t even there. I slammed on the brakes — stopping just in time — and immediately shut off the active cruise control.

It was a stark reminder that even the best technology has limits and a good example of why drivers should always pay attention behind the wheel. But it got me thinking:

What would've happened if I hadn't stopped?
http://www.wired.com/2011/06/active-safety-systems/

That article has a good discussion of the definitions (autonomous vs. semi-autonomous or assist functions) and the logic they bring to bear. It continues:
Their only purpose is to slam on the brakes or steer the car back into your lane if a collision is imminent. In other words, if you notice the systems at work, you’re doing it wrong. Put down the phone and drive.

That's a key consideration. No automaker wants to see drivers fiddling with radios and cell phones, lulled into a false sense of security and complacency thinking the car will bail them out of trouble.
That was exactly the problem with the accident in question: the person driving thought the car would avoid trouble because it was supposed to be capable of avoiding trouble. Contrast that with the "assist" features, which *might* help you avoid trouble that you are already trying to avoid yourself.

These features are wholly different from an autopilot, which is intended to be fully autonomous.

It continues, for the next step of where this particular incident might go:
Of the three possibilities, Baker said liability would be easiest to prove if a system failed to work properly.

“It would be analogous to a ‘manufacturing defect,’ which is a slam dunk case for plaintiffs in the product liability context,” he said.
 
  • #68
Interestingly, here is a slide from a Tesla presentation a couple of months ago that has quite a few similarities with the path I described, though missing the last part (the approval):

[Image: tesla-autopilot-development-process.png]


http://electrek.co/2016/05/24/tesla-autopilot-miles-data/

Given that the feature was activated one year after the hardware first went on sale, the "simulation" couldn't have been based on real world test data from the car and the "in-field performance validation" must have been less than one year.

Tesla sold roughly 50,000 Model S's in 2015. At an average of 10,000 miles per year per car and a constant sales rate (so each car was on the road for about half the year, on average), that's roughly 250 million miles of recorded data. With an average of 1 death per 100 million miles for normal cars, all of this data would have recorded at most about *3* deaths, from perhaps 2 fatal crashes. That's not a lot, and nowhere near enough to prove they are safer than human drivers (in deaths, anyway).

And, of course, all of that assumes they are driven in autopilot in situations that are as risky as average -- which is almost certainly not true.
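As a rough sanity check of the arithmetic above, here is a back-of-envelope sketch. The fleet size, per-car mileage, half-year adjustment, and fatality rate are the same assumed round numbers used in this post, not official Tesla or NHTSA figures:

```python
# Back-of-envelope estimate of how many fatalities the recorded fleet data
# could be expected to contain. All inputs are assumed round numbers.
cars_sold = 50_000                  # rough Model S sales in 2015
miles_per_car_per_year = 10_000     # assumed average annual mileage
avg_fraction_of_year = 0.5          # constant sales rate -> ~half a year per car
human_fatality_rate = 1 / 100e6     # ~1 death per 100 million miles

fleet_miles = cars_sold * miles_per_car_per_year * avg_fraction_of_year
expected_deaths = fleet_miles * human_fatality_rate

print(f"Recorded fleet miles: {fleet_miles:,.0f}")                         # 250,000,000
print(f"Expected deaths at the human-driver rate: {expected_deaths:.1f}")  # 2.5
```

With an expectation of only two or three fatalities in the entire dataset, a single crash more or less swings the comparison enormously, which is why a sample this small cannot establish that the system is safer than human drivers.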
 
  • #69
mheslep said:
How does singling out a case of "truck in front" change anything? Riding autonomously, one will routinely see dozens of obstacles requiring braking to avoid a collision: the red-light intersection, the downed tree, etc. One either gets ahead of the software in every possible case, manually applying the brakes, or never does, absent detailed knowledge of the software's corner cases. In the former case (routine manual override), of what use are autonomous vehicles?
I used the specific example, but it does not really matter. If the reason to react is visible ahead, a normally working self-driving car won't brake at the last possible second, leaving enough time for the driver to see that the situation might get problematic. If a problem appears suddenly and requires immediate reaction, the driver can simply react, as they do in normal cars as well. Why would you wait for the car?

Also keep in mind that we are discussing "beyond the safety level of a human driving a car" already: we are discussing how to reduce the accident rate even further by combining the capabilities of the car with those of the human driver.
mheslep said:
In the former case (routine manual override), of what use are autonomous vehicles?
Emergency situations are not routine. Autonomous vehicles are more convenient to drive - currently their only advantage. Once the technology is more mature, completely driverless taxis, trucks and so on can have a huge economic impact.

russ_watters said:
On page 87, it says "Adaptive cruise control will not make an emergency stop."
How many drivers read page 87 of the manual? Even worse if you see the car slowing down under normal conditions but cannot rely on it in emergencies.

Concerning the approval: as you said already, regulations are behind. There is probably no formal approval process that Tesla could even apply for.
 
  • #70
mfb said:
...If the reason to react is visible ahead, a normally working self-driving car won't brake at the last possible second, leaving enough time for the driver to see that the situation might get problematic...
That sounds completely unreasonable, but perhaps I misunderstand. You suggest the driver monitor every pending significant action of the would-be autonomous vehicle, and that in the, say, 2 or 3 seconds before a problem, the human driver always stand ready to pounce if the vehicle fails to react in the first second or so? During test and development, this is indeed how the vehicles are tested in my experience, generally only at very low speeds. But I can't see any practical release to the public under those conditions.

mfb said:
Emergency situations are not routine.
The case of this truck turning across the highway *was* routine. The situation turned from routine to emergency in a matter of seconds, as would most routine driving when grossly mishandled.
 
  • #71
mheslep said:
That sounds completely unreasonable, but perhaps I misunderstand. You suggest the driver monitor every pending significant action of the would-be autonomous vehicle, and that in the, say, 2 or 3 seconds before a problem, the human driver always stand ready to pounce if the vehicle fails to react in the first second or so?
At least for now, the driver should watch the street and be ready to take over if necessary. Yes. That's what Tesla requires the drivers to do, and for now it is a reasonable requirement.
The situation is not that different if you drive yourself on a highway for example: typically you adjust your speed and course slightly once in a while to stay in your lane and to keep the distance to the previous car, or to stay at your preferred speed. If an emergency situation occurs, you suddenly change that, and brake and/or steer. With Tesla, the main difference is that the slight speed and course adjustments are done by the car. It can also avoid emergency situations by braking before it gets dangerous. If an emergency situation comes up, brake - but chances are good the car starts braking before you can even react: an additional gain in safety.
mheslep said:
The case of this truck turning across the highway *was* routine.
Depends on how much warning time the car/driver had, something I don't know. Do you? If there was enough time, any driver watching the street could have started braking before it got dangerous. I mean, what do you expect drivers to do if they watch the street and the car does something wrong, wait happily until the car hits the truck at full speed?
 
  • #72
The WSJ article out today on the Tesla autopilot is superb, as the WSJ frequently is on industry topics. The article nails the game Tesla is playing with the autopilot: putting out official due-diligence disclaimer statements about it on one hand, and promoting its capabilities on the other. There have been several severe accidents and some near misses avoided by drivers in autopilot vehicles, though only the one fatality.

The wonder-vehicle line:
In March 2015, Mr. Musk told reporters the system could make it possible in the future for Tesla vehicles to drive themselves “from parking lot to parking lot.”...

In April of this year, Mr. Musk retweeted a video by Mr. Brown that shows the Autopilot of his Model S preventing a crash...

Tesla has said its technology is the most advanced system on the market,...

The due-diligence line:
Tesla said the self-driving system worked exactly as it should have, citing data from the car’s “historical log file,” a document signed by Mr. Bennett, and an owner’s manual declaring the technology “cannot detect all objects and may not brake/decelerate for stationary vehicles.”...

Owner’s manuals state that the technology “is designed for your driving comfort and convenience and is not a collision warning or avoidance system.”...

customers are told explicitly what Autopilot is and isn’t capable of doing.
Oh, if it was in the owner's manual, that's OK then. Ford should have put a notice in the Pinto owner's manual years ago that rear impacts could ignite the gas tank, and thereby avoided the 1.5-million-vehicle recall.
 
  • #73
mheslep said:
Oh, if it was in the owner's manual, that's OK then. Ford should have put a notice in the Pinto owner's manual years ago that rear impacts could ignite the gas tank, and thereby avoided the 1.5-million-vehicle recall.
You can use a Tesla without autopilot. You cannot use a Pinto without a gas tank.
 
  • #75
Another 'autopilot' incident. This one seems to demonstrate how completely some people believe the autopilot hype.

http://abcnews.go.com/Business/wireStory/feds-seek-autopilot-data-tesla-crash-probe-40515954
The company said the Model X alerted the driver to put his hands on the wheel, but he didn't do it. "As road conditions became increasingly uncertain, the vehicle again alerted the driver to put his hands on the wheel. He did not do so and shortly thereafter the vehicle collided with a post on the edge of the roadway," the statement said.

The car negotiated a right curve and went off the road, traveling about 200 feet on the narrow shoulder, taking out 13 posts, Shope said.

The trooper did not cite the driver, saying he believed any citation would be voided because of the driver's claim that the car was on Autopilot.

https://teslamotorsclub.com/tmc/threads/my-friend-model-x-crash-bad-on-ap-yesterday.73308/
 
  • #78
mheslep said:
Musk has an auto response ready for autopilot accidents: "Not material"
http://fortune.com/2016/07/05/elon-musk-tesla-autopilot-stock-sale/
I'm not sure of the legality/insider trading implications of that, but it will be interesting to see if that goes anywhere. Something that has bothered me since the start but was more strongly/specifically worded in that article is this:
He [Musk] continued, “Indeed, if anyone bothered to do the math (obviously, you did not) they would realize that of the over 1M auto deaths per year worldwide, approximately half a million people would have been saved if the Tesla autopilot was universally available. Please, take 5 mins and do the bloody math before you write an article that misleads the public.”
The implicit claim (also implied in this thread) is that Tesla's fatalities per 100 million miles stat is comparable to the NHTSA's. Is it? That would surprise me a lot, given in particular Tesla's insistence that the autopilot should only be used in easy driving situations.

Here's the NHTSA's stats:
http://www-fars.nhtsa.dot.gov/Main/index.aspx

They don't differentiate between driving regimes (highway vs. local roads, for example), but they do include pedestrians and motorcyclists, which wouldn't seem an appropriate comparison. But given the low number of Tesla miles driven, if a Tesla kills a motorcyclist or pedestrian in the next couple of years, their fatality rate for those categories would be vastly higher than average.
 
  • #79
russ_watters said:
... Something that has bothered me since the start but was more strongly/specifically worded in that article is this:

The implicit claim (also implied in this thread) is that Tesla's fatalities per 100 million miles stat is comparable to the NHTSA's. Is it? That would surprise me a lot, given in particular Tesla's insistence that the autopilot should only be used in easy driving situations.

Here's the NHTSA's stats:
http://www-fars.nhtsa.dot.gov/Main/index.aspx
I find Musk's response to Fortune about the accident rate, as quoted, both highly arrogant and invalid. A few researchers in the autonomous vehicle field have already pointed out that comparing data miles from a heavy, strongly constructed sedan to the variety of the world's vehicles, including the likes of motorcycles, is invalid.

The arrogance comes from justifying the macro effect of autonomous vehicles on accident rates at large while AVs might well take more lives in sub-groups of drivers. Musk is not entitled to make that decision. That is, several factors impact the accident rate given by NHTSA. One of the highest is still alcohol, a factor in roughly a third of all US accidents.
https://www.google.com/search?q=percentage+of+vehicle+accidents+involving+alcohol
Clearly, AVs, despite their blindness to crossing trucks, could greatly lower alcohol-related accidents and thereby make AV stats look better.

But what about, say, drivers hauling their kids to school? Wild guess here, but among that group I'm guessing alcohol-related accidents are close to zero, and their tendency to do other things that raise the accident rate, like exceeding the posted limit by 20 mph, running stops, or letting their tires go bald, is also well below normal. Their accident rate is *not* the average Musk says his AV can beat, saving the millions. If one of them in an AV drives under a truck and continues on as if nothing happened? Not material to Musk.
 
  • #80
NHTSA files formal request for crash information (about all crashes):
http://www.foxnews.com/leisure/2016/07/13/feds-examine-how-tesla-autopilot-reacts-to-crossing-traffic/?intcmp=hpffo&intcmp=obnetwork

It is due August 26 and carries fines if not responded to.
 
  • #81
mheslep said:
If one of them in an AV drives under a truck and continues on as if nothing happened?
Automated cars have different accidents. The dataset is too small to make precise comparisons to human drivers, but human drivers cause accidents an automated car won't (because it is never drunk, for example), while automated vehicles cause accidents a human driver wouldn't. Luckily you can combine both in a Tesla: read the manual and pay attention to the road if you use the autopilot. That way you can limit the accident rate to accidents where both the autopilot and the human fail to react properly.
 
  • #82
mfb said:
Automated cars have different accidents. The dataset is too small to make precise comparisons to human drivers,
Yes, this is why Musk needs to stop making condescending comments about doing the math on AVs.

mfb said:
but human drivers cause accidents an automated car won't (because it is never drunk, for example), while automated vehicles cause accidents a human driver wouldn't. Luckily you can combine both in a Tesla: read the manual and pay attention to the road if you use the autopilot. That way you can limit the accident rate to accidents where both the autopilot and the human fail to react properly.

One can't, on the one hand, claim credit for lower accident rates because an AV is never drunk, and on the other hand demand that drunk humans pay attention, so that AV mistakes become the fault of humans not paying attention.

More generally, humans are lousy at paying attention while not being actively involved. Researchers in AVs keep repeating this.
 
  • #83
mheslep said:
One can't, on the one hand, claim credit for lower accident rates because an AV is never drunk, and on the other hand demand that drunk humans pay attention, so that AV mistakes become the fault of humans not paying attention.
I would expect an AV with a drunk driver paying attention (sort of) to be better than a drunk driver driving. Both are illegal, of course.
mheslep said:
More generally, humans are lousy at paying attention while not being actively involved.
I wonder if you could involve them via some mandatory mini-games that involve watching traffic. Press a button every time a red car passes by, or whatever.
 
  • #84
mheslep said:
More generally, humans are lousy at paying attention while not being actively involved. Researchers in AVs keep repeating this.

The Tesla 'Autopilot' operates in the man-machine control loop at the point, the physical driving process, where the brain normally works primarily below full awareness, in the land of instincts and impulses. We learn to trust our internal human 'Autopilot' to evaluate the environment and warn the fully aware driving brain of danger in time to avoid problems.

The problem with the Tesla system IMO is trust. Tesla has managed to create a system (the entire human-machine interface) that seems so good it can be trusted to drive even if the manual says NO. Ok, we have little game clues to maintain driving focus but as long as the car handles those subconscious driving activities without intervention we know it's just a game instead of real driving.

https://hbr.org/2016/07/tesla-autopilot-and-the-challenge-of-trusting-machines
That decision — to trust or not — isn’t necessarily a rational one, and it’s not based only on the instructions people are given or even the way the machine is designed; as long as there have been cars, there have been people who anthropomorphize them. But the addition of technology that starts to feel like intelligence only furthers that inclination. We trust machines when we see something like ourselves in them — and the more we do that, the more we’re likely to believe they’re as capable as we are.

http://arstechnica.com/cars/2016/05...las-autopilot-forced-me-to-trust-the-machine/
It takes a while to get used to this feeling. Instead of serving as the primary means of direction for a car, you're now a meat-based backup and failsafe system. Instincts and impulses formed by more than two decades behind the wheel scream out a warning—"GRAB THE WHEEL NOW OR YOU'LL DIE"—while the rational forebrain fights back. Eventually, the voices quiet as the car starts to prove itself. When the road curves, the car follows. If the car is ever going too fast to negotiate the curve, it slows down and then accelerates smoothly back out of the turn.
 
  • #86
nsaspook said:
The Tesla 'Autopilot' operates in the man-machine control loop at the point, the physical driving process, where the brain normally works primarily below full awareness, in the land of instincts and impulses. We learn to trust our internal human 'Autopilot' to evaluate the environment and warn the fully aware driving brain of danger in time to avoid problems.

The problem with the Tesla system IMO is trust. Tesla has managed to create a system (the entire human-machine interface) that seems so good it can be trusted to drive even if the manual says NO. Ok, we have little game clues to maintain driving focus but as long as the car handles those subconscious driving activities without intervention we know it's just a game instead of real driving.
That's basically how I see it, though I would describe it as "responsibility" instead of trust (but it could be trust in who has responsibility...).

New back-up safety features like brake assist/overrides don't need to have that trust/responsibility because the person is still supposed to be fully in charge: the computer assist happens after the human has already failed. Such systems can only improve safety because the human never has to think about who has responsibility.

On the other hand, systems where the computer has primary responsibility and the person "back-up" responsibility are inherently flawed/self-contradictory because the person doesn't have the reaction speed necessary to take over if the computer fails. So the person's responsibility has to be either all or nothing.

This problem can be mitigated somewhat, but it can't be fixed by requiring a person to keep their hands on the wheel or by a notification that the driver has to take back control; that applies even to radar-assisted cruise controls. It isn't the failures that the computer knows about that are the biggest problem, it is the ones it doesn't know about.

Sooner or later, a Tesla will notify its driver to take back control just as its front wheels go over the cliff. Elon won't be successful in using the "hey, it warned you!" defense there.
 
  • #87
russ_watters said:
Consumer Reports calls on Tesla to disable autopilot:

http://www.usatoday.com/story/money...umer-reports-tesla-motors-autopilot/87075956/

Regaining Control
Research shows that humans are notoriously bad at re-engaging with complex tasks after their attention has been allowed to wander. According to a 2015 NHTSA study (PDF), it took test subjects anywhere from three to 17 seconds to regain control of a semi-autonomous vehicle when alerted that the car was no longer under the computer's control. At 65 mph, that's between 100 feet and a quarter-mile traveled by a vehicle effectively under no one's control.

This is what’s known by researchers as the “Handoff Problem.” Google, which has been working on its Self-Driving Car Project since 2009, described the Handoff Problem in https://www.google.com/selfdrivingcar/files/reports/report-1015.pdf. "People trust technology very quickly once they see it works. As a result, it’s difficult for them to dip in and out of the task of driving when they are encouraged to switch off and relax,” said the report. “There’s also the challenge of context—once you take back control, do you have enough understanding of what’s going on around the vehicle to make the right decision?"
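For scale, here is a rough conversion of the quoted handoff times into distance traveled at 65 mph (an illustrative calculation, not figures taken from the NHTSA study itself):

```python
# Distance covered during a 3-17 second handoff window at 65 mph.
FEET_PER_MILE = 5280
speed_fps = 65 * FEET_PER_MILE / 3600   # ~95.3 ft/s

for seconds in (3, 17):
    feet = speed_fps * seconds
    print(f"{seconds:>2} s -> {feet:,.0f} ft ({feet / FEET_PER_MILE:.2f} mi)")
# Output: roughly 286 ft (0.05 mi) for 3 s and 1,621 ft (0.31 mi) for 17 s.
```

The upper end is indeed about a quarter-mile or a bit more; the quoted lower figure of 100 feet corresponds to roughly one second of travel at that speed.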
 
  • #88
17 seconds is a long time, but 3 seconds is still too long and not surprising. Situational awareness requires history. A simple action like stomping on a brake can fix a lot of problems, but if the car is, for example, having trouble negotiating a curve, you have to get the feel of the handling before you can figure out how much steering to apply.
 
  • #89
Regaining Control
Research shows that humans are notoriously bad at re-engaging with complex tasks after their attention has been allowed to wander. According to a 2015 NHTSA study (PDF), it took test subjects anywhere from three to 17 seconds to regain control of a semi-autonomous vehicle when alerted that the car was no longer under the computer's control. ...

This is what’s known by researchers as the “Handoff Problem.” Google, which has been working on its Self-Driving Car Project since 2009, described the Handoff Problem in https://www.google.com/selfdrivingcar/files/reports/report-1015.pdf. "People trust technology very quickly once they see it works. As a result, it’s difficult for them to dip in and out of the task of driving when they are encouraged to switch off and relax,” said the report. “There’s also the challenge of context—once you take back control, do you have enough understanding of what’s going on around the vehicle to make the right decision?"

Thanks NSA. Of course. Of course. This is not a matter of dismissing the owner's manual; it is simply impossible to change the attention behaviour of human beings as a group. If it were possible, AVs would be moot.

Tesla's assertion that the driver must pay attention in autopilot mode, else accidents are the driver's fault, is ridiculous on its face, akin to encouraging drivers to cruise around with no safety harness on but to continually brace themselves against the one-ton forces of an impact. If you go out the windshield, you just didn't brace hard enough. Now stop bothering us, we're inventing the future.
 
