The actual prospect of autonomous cars?

  • Thread starter: Gear300
  • Tags: Cars
Summary
The discussion centers on the feasibility of achieving fully autonomous vehicles by the mid-century, highlighting the technical and social challenges involved. Participants express skepticism about current machine learning capabilities and the need for advanced infrastructure to support autonomous driving. Concerns are raised regarding the reliability of technology in complex driving situations, emphasizing that human oversight may still be necessary. The conversation also touches on the potential for autonomous systems to improve traffic flow, though achieving significant increases in vehicle throughput is viewed as overly optimistic. Ultimately, the need for a clear problem statement and standards for performance is underscored as crucial for advancing autonomous vehicle technology.
  • #31
Office_Shredder said:
If your traffic flows more smoothly you won't get jams that cause cars to pile up on the road like that in the first place.
That's the beauty of trains: there's one controller (the train engineer) for all of the throttles and all of the brakes. Auto traffic has to contend with each individual driver braking almost at random, and the reaction times stack up, inevitably leading to backups.

I'm slowly convincing myself that autonomous cars, under some kind of central control, could have potential...
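The "reaction times stack up" point above can be made concrete with a toy model (my own illustrative sketch, not from the thread): in a platoon of human-driven cars, each follower starts braking only after a reaction delay, so the braking wave takes longer and longer to reach the back of the queue. Under a single central controller, the train-engineer case, every car can be commanded to brake at the same instant.

```python
# Toy model of a braking wave in a platoon of cars. All numbers are
# assumptions for illustration (20 cars, 1.5 s human reaction time).

def stop_wave_delay(n_cars, reaction_s, central=False):
    """Time until the LAST car in the platoon starts braking after
    the lead car brakes."""
    if central:
        # One controller commands all brakes simultaneously.
        return 0.0
    # Human drivers: each car reacts only to the car ahead,
    # so the delays stack car by car down the platoon.
    return (n_cars - 1) * reaction_s

print(stop_wave_delay(20, 1.5))                # human drivers: 28.5 s
print(stop_wave_delay(20, 1.5, central=True))  # central control: 0.0 s
```

Nearly half a minute passes before the twentieth driver even touches the brake, which is how a brief slowdown at the front becomes a standing jam at the back.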
 
  • #32
We also need to consider the transition period from human to autonomous control. There will be a significant period during which mixed traffic shares the same infrastructure, and the end point must be reached via a safe series of functional transitions.
 
  • Like
Likes FactChecker
  • #33
I think the development will play out such that there will be 2 modes of operation - one for main arterial roads and one for non-arterial roads. The main arterial roads will be well-mapped, and transponder devices could even be planted along them to aid in navigation, so operation there can be at full speed. The non-arterial roads will be mapped only to some extent, so operation will be quite slow (like 15 mph) just to be safe. I can see new subdivisions being built (or existing self-contained ones being upgraded) with transponders to allow operation there at regular speed.

I fully expect to own such a driverless car by 2030. :smile:
 
  • #34
I would suggest this clip from EconTalk, featuring computer scientist Melanie Mitchell specifically discussing the prospects of self-driving cars, as part of a broader discussion from her book, "Artificial Intelligence: A Guide for Thinking Humans" (which I've read and highly recommend):



Her conclusion (from the book, and from the brief clip here) is that truly trustworthy fully self-driving cars are still a long way away, as such a system must be able to react to unpredictable events that it could never have encountered in its training and test cases.
 
  • #35
StatGuy2000 said:
I would suggest this clip from EconTalk, featuring computer scientist Melanie Mitchell
Her opening statement includes a questionable assertion.
"Problems that humans can solve because they have knowledge. Let's say. But machines can't solve because data is not knowledge."

I also reject her main point that self driving cars get rear ended too often because they stop for objects on the road that are difficult to identify. I say that human drivers are at fault if they make snap decisions to run over some objects. That plastic bag on the road might contain a kitten. A child's ball rolling toward the road might be followed by a child. So if I stop for any object in or near the road and get rear ended, the collision is not my fault. Ditto for an AI driver.
 
  • Like
Likes swampwiz and russ_watters
  • #36
Personally, I think that towards the end of the century (perhaps sooner) we will definitely have the technology whereby fully autonomous vehicles will be considerably safer than those with drivers. BUT ... I am less confident that we will have fully worked out (1) sufficient societal acceptance, (2) the necessary infrastructure, and (3) the legal issues (insurance, etc).

Technology is, relatively speaking, the easy part.
 
  • Like
Likes russ_watters, PeroK and Bystander
  • #37
anorlunda said:
I also reject her main point that self driving cars get rear ended too often because they stop for objects on the road that are difficult to identify. I say that human drivers are at fault if they make snap decisions to run over some objects. That plastic bag on the road might contain a kitten. A child's ball rolling toward the road might be followed by a child. So if I stop for any object in or near the road and get rear ended, the collision is not my fault. Ditto for an AI driver.
Human drivers (good ones, anyway) look in their rear view mirror as they slam on the brakes to save the bunny in the road. If the car behind is too close or on their phone, the bunny loses.
 
  • Like
Likes russ_watters
  • #38
gmax137 said:
Human drivers (good ones, anyway) look in their rear view mirror as they slam on the brakes to save the bunny in the road. If the car behind is too close or on their phone, the bunny loses.
So your defense would be: "Sorry your honor, that baby looked like a bunny to me."
 
  • Like
Likes russ_watters
  • #39
anorlunda said:
So your defense would be: "Sorry your honor, that baby looked like a bunny to me."
No, but that goes to the point: it is OK to run over some things in the road, but not OK for other things. There is an intelligent assessment of damage to the "object" in the road, damage to the car, damage from the following car. If there's no one in the oncoming lane, the best choice could be to cross the lines into that lane. Blindly braking hard whenever anything is in the road is too crude.
 
  • Like
Likes russ_watters
  • #40
phinds said:
Personally, I think that towards the end of the century (perhaps sooner) we will definitely have the technology whereby fully autonomous vehicles will be considerably safer than those with drivers. BUT ... I am less confident that we will have fully worked out (1) sufficient societal acceptance, (2) the necessary infrastructure, and (3) the legal issues (insurance, etc).

Technology is, relatively speaking, the easy part.
I should add, one of my big concerns about autonomous cars is that people in the U.S. will put up with tens of thousands of car deaths involving human drivers (we do about 40,000 / year) but let one person get killed by an autonomous vehicle and the manufacturer will never hear the end of it and will be sued by the relatives.
 
  • #41
gmax137 said:
No, but that goes to the point: it is OK to run over some things in the road, but not OK for other things. There is an intelligent assessment of damage to the "object" in the road, damage to the car, damage from the following car. If there's no one in the oncoming lane, the best choice could be to cross the lines into that lane. Blindly braking hard whenever anything is in the road is too crude.
It may be useful to look at how AI training data is gathered and used.

My understanding is that Tesla gathers data from every Tesla. Not just the autodrive equipped cars, but all of them. And most importantly when manually driven. Multiple times per hour, each car can record what I call triplet data packets and send them wirelessly to Tesla.
  1. What did the 8 cameras, looking all directions, see? Other sensors can be included; slippery roads yes/no?
  2. What action did the driver take? steering/throttle/brakes
  3. What was the outcome? Nothing/accident/full stop
Given 2.5 million Teslas on the road, and say 10 triplets per hour of driving, they might generate billions of triplet examples per year to train their AI. There would be multiple examples of "Something that looks like X in front, no car behind, driver braked, no accident." and examples of "Something that looks like X in front, car behind, driver swerved, no accident." plus examples of "Something that looks like X in front, car stops, accident results." There is no need to analyze what X really is, just what it looks like to the camera. The AI is being "taught" by human drivers.
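The fleet-size estimate is easy to sanity-check. All inputs here are assumptions for the back-of-envelope exercise, including an assumed average of one hour of driving per car per day:

```python
# Back-of-envelope check of the fleet data volume. Every number below
# is an assumption for illustration, not a figure from Tesla.
fleet = 2_500_000          # cars reporting data
triplets_per_hour = 10     # triplet packets per car-hour of driving
driving_hours_per_day = 1  # assumed average daily driving time

per_year = fleet * triplets_per_hour * driving_hours_per_day * 365
print(f"{per_year:,}")  # 9,125,000,000 -> roughly ten billion per year
```

Even under these conservative assumptions the training set grows by billions of examples a year, which is the real point: no hand-built test suite comes close.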

Neural network AI is not really intelligence, it is merely pattern matching. When the sensed data look like A, take the non-accident action B and avoid the accident-causing action C. That is not "blindly" choosing a course of action. Just the opposite, it is using all available data.

Neural networks do not reason. They do not use logic. They merely match patterns of input data with desired outputs. They call that AI or intelligence for marketing purposes, but really that is a misnomer.

A garage door opener that refuses to close when something blocks the light beam is an example of a one-branch neural net. The door opener advertisement may say "AI smart door", but we know better.
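The garage-door analogy can be written out as the single threshold decision it really is (an illustrative sketch of mine; the function name and threshold are made up):

```python
# One "neuron": a single threshold unit deciding whether the door
# may close, based on how much of the light beam reaches the sensor.

def door_may_close(beam_intensity, threshold=0.5):
    """Allow closing only if the safety beam is unbroken, i.e. the
    measured intensity exceeds the threshold. No reasoning, no logic
    about WHAT is blocking the beam - just one input, one threshold."""
    return beam_intensity > threshold

print(door_may_close(0.9))   # beam unbroken -> True, door closes
print(door_may_close(0.1))   # something blocks the beam -> False
```

Scale that single threshold up to millions of weighted inputs and you have a modern neural net: vastly more capable pattern matching, but the same kind of operation.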
 
