The actual prospect of autonomous cars?

  • Thread starter: Gear300
  • #36
phinds
Science Advisor
Insights Author
Gold Member
2022 Award
Personally, I think that towards the end of the century (perhaps sooner) we will definitely have the technology whereby fully autonomous vehicles will be considerably safer than those with drivers. BUT ... I am less confident that we will have fully worked out (1) sufficient societal acceptance, (2) the necessary infrastructure, and (3) the legal issues (insurance, etc).

Technology is, relatively speaking, the easy part.
 
Likes russ_watters, PeroK and Bystander
  • #37
gmax137
Science Advisor
I also reject her main point that self-driving cars get rear-ended too often because they stop for objects on the road that are difficult to identify. I say that human drivers are at fault if they make snap decisions to run over some objects. That plastic bag on the road might contain a kitten. A child's ball rolling toward the road might be followed by a child. So if I stop for any object in or near the road and get rear-ended, the collision is not my fault. Ditto for an AI driver.
Human drivers (good ones, anyway) look in their rear view mirror as they slam on the brakes to save the bunny in the road. If the car behind is too close or on their phone, the bunny loses.
 
Likes russ_watters
  • #38
anorlunda
Staff Emeritus
Insights Author
gmax137 said:
Human drivers (good ones, anyway) look in their rear view mirror as they slam on the brakes to save the bunny in the road. If the car behind is too close or on their phone, the bunny loses.
So your defense would be: "Sorry your honor, that baby looked like a bunny to me."
 
Likes russ_watters
  • #39
gmax137
Science Advisor
anorlunda said:
So your defense would be: "Sorry your honor, that baby looked like a bunny to me."
No, but that goes to the point: it is OK to run over some things in the road but not others. There is an intelligent assessment involved: damage to the "object" in the road, damage to the car, damage from the following car. If there's no one in the oncoming lane, the best choice could be to cross the lines into that lane. Blindly braking hard whenever anything is in the road is too crude.
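To make the trade-off concrete, here's a toy sketch in Python. The action names and cost numbers are all made up, not from any real autopilot; they just show the shape of the decision.

Code:
# Hypothetical sketch of a cost-weighted obstacle response.
# All action names and cost values are illustrative only.

def choose_action(object_cost, oncoming_lane_clear, tailgater_close):
    """Pick the cheapest of: run over, brake hard, or swerve."""
    costs = {
        # Running over the object costs whatever it is "worth"
        # (plastic bag ~ 0, possible child ~ effectively infinite).
        "run_over": object_cost,
        # Hard braking risks a rear-end hit if someone is tailgating.
        "brake_hard": 5_000 if tailgater_close else 0,
        # Swerving is cheap only if the oncoming lane is clear.
        "swerve": 100 if oncoming_lane_clear else float("inf"),
    }
    return min(costs, key=costs.get)

# A plastic bag with a tailgater behind: just drive over it.
print(choose_action(object_cost=0, oncoming_lane_clear=False,
                    tailgater_close=True))  # -> run_over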
 
Likes russ_watters
  • #40
phinds
Science Advisor
Insights Author
Gold Member
2022 Award
phinds said:
Personally, I think that towards the end of the century (perhaps sooner) we will definitely have the technology whereby fully autonomous vehicles will be considerably safer than those with drivers. BUT ... I am less confident that we will have fully worked out (1) sufficient societal acceptance, (2) the necessary infrastructure, and (3) the legal issues (insurance, etc).

Technology is, relatively speaking, the easy part.
I should add that one of my big concerns about autonomous cars is that people in the U.S. will put up with tens of thousands of deaths per year caused by human drivers (we have about 40,000/year), but let one person get killed by an autonomous vehicle and the manufacturer will never hear the end of it and will be sued by the relatives.
 
  • #41
anorlunda
Staff Emeritus
Insights Author
gmax137 said:
No, but that goes to the point: it is OK to run over some things in the road but not others. There is an intelligent assessment involved: damage to the "object" in the road, damage to the car, damage from the following car. If there's no one in the oncoming lane, the best choice could be to cross the lines into that lane. Blindly braking hard whenever anything is in the road is too crude.
It may be useful to look at how AI training data is gathered and used.

My understanding is that Tesla gathers data from every Tesla: not just the Autopilot-equipped cars, but all of them, and, most importantly, when manually driven. Multiple times per hour, each car can record what I call triplet data packets and send them wirelessly to Tesla:
  1. What did the 8 cameras, looking in all directions, see? Other sensors can be included (slippery road: yes/no).
  2. What action did the driver take (steering/throttle/brakes)?
  3. What was the outcome (nothing/accident/full stop)?
Given 2.5 million Teslas on the road, say 10 triplets per hour and an hour or so of driving per day, they might generate on the order of ten billion triplet examples per year to train their AI. There would be multiple examples of "something that looks like X in front, no car behind, driver braked, no accident," examples of "something that looks like X in front, car behind, driver swerved, no accident," plus examples of "something that looks like X in front, car stops, accident results." There is no need to analyze what X really is, just what it looks like to the camera. The AI is being "taught" by human drivers.
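A minimal sketch of what one such record might look like in code. The field names are my guesses at a schema, not Tesla's actual format.

Code:
# Hypothetical sketch of the (observation, action, outcome) triplet
# described above. Field names are illustrative, not Tesla's schema.
from dataclasses import dataclass
from typing import List

@dataclass
class Triplet:
    camera_frames: List[bytes]  # what the 8 cameras saw
    road_slippery: bool         # extra sensor context
    steering: float             # driver action: wheel angle
    throttle: float             # driver action: accelerator
    braking: float              # driver action: brake pressure
    outcome: str                # "nothing" | "accident" | "full_stop"

# Training then amounts to learning a map from the observation
# fields to the (steering, throttle, braking) that human drivers
# chose in the non-accident examples.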

Neural network AI is not really intelligence; it is merely pattern matching: when the sensed data look like A, take the non-accident action B and avoid the accident-causing action C. That is not "blindly" choosing a course of action. Just the opposite: it uses all available data.

Neural networks do not reason. They do not use logic. They merely match patterns of input data with desired outputs. They call that AI or intelligence for marketing purposes, but really that is a misnomer.
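To illustrate "pattern matching, not reasoning" concretely, here is a toy nearest-neighbor "policy" (entirely made up): it never analyzes what the obstacle is, it just returns whatever action went with the most similar remembered observation.

Code:
import math

# Toy memory of (observation vector, action that avoided an accident).
# The two features: [obstacle ahead?, car close behind?]
memory = [
    ((1.0, 0.0), "brake"),   # obstacle ahead, nobody behind
    ((1.0, 1.0), "swerve"),  # obstacle ahead, car close behind
    ((0.0, 0.0), "cruise"),  # clear road
]

def act(observation):
    # Return the action paired with the nearest stored observation.
    _, action = min(memory, key=lambda ex: math.dist(ex[0], observation))
    return action

print(act((0.9, 0.8)))  # -> "swerve": matched to a pattern, not reasoned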

A garage door opener that refuses to close when something blocks the light beam is an example of a one-branch "neural net." The door opener's advertisement may say "AI smart door," but we know better.
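Written out, that "smart" door is literally a single comparison (a made-up sketch, not any vendor's firmware):

Code:
# The garage-door "AI" reduced to its one branch.
def door_should_close(beam_blocked: bool) -> bool:
    return not beam_blocked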
 
