Do you feel safer with self-driving cars on the road?

In summary, the conversation discusses the limitations and potential benefits of self-driving cars. Some individuals are skeptical and believe that human drivers are still necessary for safe driving, while others argue that self-driving cars could potentially improve safety on the road. The conversation also touches on the idea of feeling safe versus actually being safe, and the potential for self-driving cars to handle complex situations involving pedestrians. There is also mention of the development and progress of self-driving car technology, and differing opinions on when it will become widely adopted.

Do you feel safer with self-driving cars on the road?

  • Yes

    Votes: 31 (41.3%)
  • No

    Votes: 37 (49.3%)
  • No opinion

    Votes: 7 (9.3%)

  • Total voters: 75
  • #141
Jetflyer0 said:
Yes, I meant once they are all self driving and can tell the routes of other cars that may affect their own route then adjust accordingly. Search self-driving cars intersection gif for an idea of what I mean
Totality won't happen for a very long time. But can you not agree that as more driverless cars take to the road, the safer it gets?
 
  • #142
Greg Bernhardt said:
Totality won't happen for a very long time. But can you not agree that as more driverless cars take to the road, the safer it gets?
Yeah, the fewer unpredictable driving patterns there are, the safer the roads will become
 
  • #143
Jetflyer0 said:
I won't really feel safe with self-driving cars until they are all AI controlled and can communicate all their data.
Greg Bernhardt said:
Which self-driving cars are not AI controlled and don't communicate data? Do you mean until all cars are self-driving?
Addressed before: (...)
Stavros Kiri said:
Yes, I do feel kind of safe, and I do kind of want to trust them, but I will feel a lot safer and almost trust them completely when/if, perhaps in the near future (by 2020 or so), as part of the internet of things, all(?) cars will be self-driving and controlled/coordinated via a superfast 5G mobile net, to avoid all accidents (on highways, city and rural roads etc.) ...
Maybe it will take longer though, to fully practically apply ... (2025 or so ...)
 
Last edited:
  • #144
Night Vision for Self-Driving Cars

"Elon Musk famously thinks that cars can be made to drive themselves without relying on expensive laser-ranging lidars. But while Tesla is moving ahead with one fewer sensor than most self-driving car companies, a new startup wants them to add yet another—an infrared camera."

AdaSky is developing a far infrared thermal camera called Viper that it says can expand the conditions that automated cars will be able to operate in, and improve safety.

“Today’s sensors are not good enough for fully self-driving cars and that’s where we come in,” says Dror Meiri, vice president of business development at AdaSky. “We think infrared (IR) technology can bridge the gap from Level 3 all the way to Levels 4 and 5.”

...AdaSky says its system is currently being evaluated by several car companies and suppliers, and the startup hopes to start mass production in 2020 or 2021.

From, https://spectrum.ieee.org/cars-that...driving/do-selfdriving-cars-need-night-vision

From, https://www.google.com/search?q=how...ome..69i57.10583j0j8&sourceid=chrome&ie=UTF-8

Self-driving cars don't have to be better than the best driver to save lives; they just need to be better than the average distracted, sleepy, texting, slightly drunk Joe or Jill.

Watch Yamaha's Humanoid Robot Ride a Motorcycle Around a Racetrack




From, https://spectrum.ieee.org/cars-that...id-robot-ride-a-motorcycle-around-a-racetrack

What will they think of next?
 

  • #145
Spinnor said:
...while Tesla is moving ahead...
Just that one, so far... right ??
 
  • #146
OCR said:
Just that one, so far... right ??

Just one, what? :smile:
 
Last edited by a moderator:
  • #147
I was almost hit intentionally by a car two days ago (I was on a bike in the bicycle path, impossible to be in any car's way). I lost an aunt about six years ago because an adolescent decided to kill himself on the highway in a head-on collision. Overall, cars are less safe than trains, planes, etc.
A car is a weapon and, from what I read and see, humans are pretty bad at using it. Many do not care about others' lives.
I'd prefer all cars driven by AI. I'd feel more secure, yes.
 
  • #149
fluidistic said:
A car is a weapon and, from what I read and see, humans are pretty bad at using it. Many do not care about others' lives.
I'd prefer all cars driven by AI. I'd feel more secure, yes.
I understand your fear about a human driving a car. But what makes you think it is better with AI?

If you had almost been hit intentionally by a car driven by AI two days ago, or if your aunt had been in an accident involving an AI malfunction instead of an adolescent "malfunction", would it make you feel better?

No matter what, a car will always be a "weapon", and the fact that "humans are pretty bad at using it" (IMHO, with the many millions of trips made each day around the world, I think they have a pretty good record) will never mean AI is pretty good at using it.

One thing's for sure: AI will NEVER care about others' lives.
 
  • #150
Spinnor said:
Just one, what? :smile:
a-head
A joke, that I knew wasn't... . o_O
 
  • #151
jack action said:
I understand your fear about a human driving a car. But what makes you think it is better with AI?
The AI's respect of traffic laws. A quicker and better judgement than any human's. Its face recognition would determine the age of anyone nearby and, in case it is involved in a huge accident mess, make better judgements to save lives, i.e. let the younger people live, and possibly the female ones. The life of a 13-year-old boy is worth more than that of a 113-year-old lady, at least according to the AI; that would make sense to me. I know this is highly debatable and what not; I'm just giving my subjective opinion here. To me this is much better than a human who doesn't have time to think about who to save and who to kill, because of the lack of time to think.

jack action said:
If you would have almost been hit intentionally by a car driven by AI 2 days ago, or that your aunt would have been in an accident involving AI malfunction instead of an adolescent "malfunction", would it make you feel better?
If this implies the AI did that on purpose in order to save others' lives, which according to its algorithms had a higher priority, then yes. I would accept the result (unless I'm dead, of course), but these events should be extremely rare.

jack action said:
NO matter what, a car will always be a "weapon" and the fact that "humans are pretty bad at using it" (IMHO, with the many millions trips done each day around the world, I think they have a pretty good record) will never mean AI is pretty good at using it.
I agree with the sentence, but I do not agree with the conclusion: that over a million people worldwide die each year in car accidents is not acceptable. This is a pretty bad record. Most of those lives could have been saved.

jack action said:
One thing's for sure: AI will NEVER care about other's lives.
Why do you think so? Is caring about people's lives something that cannot be coded/programmed?
 
Last edited by a moderator:
  • #152
fluidistic said:
The AI's respect of traffic laws. A ...
Value judgements of whose life is more important lead nowhere except to a rat's nest of ifs and buts. It is certainly not as simplistic as in the movie I, Robot ("Save the girl!"). If one wants to give a particular AI unit the "power" to decide on the termination or extension of a person's life out of several individuals in a precarious situation, then that unit will also have to be accountable for the decision made. That accountability comes through legal means for human individuals (and other entities) granted legal status.
Granting AI units the legal status of a person respecting the laws of the land would also have to extend to them prosecution under the law, be it criminal or civil. I doubt we are there yet, or ready for that. AI certainly is not.

Traffic laws are put in place by human decision making, and as such are not an ultimate panacea.
Humans have decided, correctly or not, that the time risk in their lives matters (not saying equally) just as much as the fatality risk, along with many other risks.

Even without AI, we could, right now, reduce fatalities:
Reduce maximum speed limits to such an extent that any collision is just a fender bender.
Put stop signs at every intersection.
Put a traffic cop at every intersection.
Put a helmet on every person in the car.
... etc.
We don't do that simply because we value time and money, and somewhat overlook the chances of being hurt in a car or on the road.
(Bicyclists, generally, are probably the worst at devaluing the risk of being on the road, being at the losing end of an incident with either a vehicle or an obstacle on the road.)

To make the commute and pleasure drive safer, vehicles have generally added features such as seat belts, air bags and child seats for occupant protection. The vehicles themselves have added better tires, crumple zones, better lighting and all sorts of other things for handling, visibility and endurance. An AI vehicle is an extension of these safety features, not a replacement. (Or is it? If vehicles become so collision-avoidant, with an AI calling all the shots, some of the other features could be compromised to lower the cost, or even entirely eliminated. Why pay for redundant headlights if the AI doesn't need them to see at night? Or taillights? Every smart AI should know the car ahead is slowing down through its lidar and act accordingly. Minimally, a few running lights to indicate presence.) All one has to do is ensure that all systems on an AI vehicle are maintained and operate 'properly' for the vehicle to be able to join the rest.

Since maintenance and proper operation of electrical circuits, and of all things electrical and mechanical, are suspect for vehicles at present, can the reliability of an AI vehicle be ensured with a 100% satisfaction guarantee that things will not go wrong? Is 99% OK?
98%? How about 80%? Well, if it's a 7-year-old car, 60% should be OK. After all, things do get old and wear out.

Since I don't know where this is all heading and where society will place controls, I voted No. While I may be a passenger in a self-driving car in good operating condition, can the same be said of the one next to me? Or maybe my car will be the lemon amongst the others. Who is to tell?
 
  • Like
Likes jack action
  • #153
256bits said:
Even without AI, we could, right now, reduce fatalities:
Reduce maximum speed limits to such an extent that any collision is just a fender bender.
Put stop signs at every intersection.
Put a traffic cop at every intersection.
Put a helmet on every person in the car.
... etc.
The problem is that some humans do not respect the traffic laws. If you point a camera at a rather highly frequented road, it won't take much time until an unpunished infraction is committed. So even though it's correct to assert that even without AI we could reduce fatalities using such severe methods, an AI would outperform us. And ultimately that is what we care about. Better to let humans manipulate weapons they are bad at using, or to let a very sophisticated AI do that job?
 
  • #154
fluidistic said:
If you point a camera at a rather highly frequented road, it won't take much time until an unpunished infraction is committed.
It is not because you committed an infraction that you did something dangerous. The facts will show that most infractions committed (anyway, the kind you refer to) don't cause any accident. If the contrary were true, nobody would do it. People go over the speed limit because you don't have an accident every time you do. People don't drive on the wrong side of the road because a terrible accident is almost guaranteed in such a case.

And that is what I don't understand about this way of thinking: the punishment for such actions is having an accident that results in death or injury. So why punish someone when nothing happens? Nobody in his right mind will willingly do something to hurt himself. But there are risks with every decision. Anyone should be entitled to assess the risks that go with their actions. And, yes, our actions have an impact on our surroundings, but people around us should adapt their decisions accordingly. As a society, we should educate kids about how to evaluate those risks, and the best method is leading by example. Once they've learned a way, you have to accept it and deal with it, even if it wasn't what you expected of them. At worst, you should consider yourself a bad teacher, not them bad students.

I don't like having my life guided by the fears of others. You want a car driven by AI because you think it makes better decisions than you? I respect that decision and I might follow your path. You want your neighbors (whom you might not even know) to use driverless cars because you don't trust them to drive a car correctly? I think you're overstepping your bounds. Maybe you're right and one could implicate you in a terrible accident, but, from my point of view, your only option is to thank God you have a super smart car that can identify human-driven cars and avoid crossing their path for your protection.

If I'm afraid my neighbor will steal from me, I get an alarm system, I hire a security guard, I buy a dog. What I don't do is ask everyone to wear an ankle bracelet that will give them an electric shock if they come to my house without my consent; you know, because there are thieves everywhere, so why take a chance? If they don't come to my house, they won't get electrocuted, so what's the harm? Do you see the difference between controlling what you do and controlling what others do? In a society where everyone is equal, you cannot act as if you are better than others.

You should have the right to act as you see fit with your surroundings, not to choose how others act around you.

Oh yeah, and if you also think that there is such a thing as an infallible law, especially one made by humans, you will be disappointed sooner or later.
 
  • Like
Likes OCR
  • #155
jack action said:
It is not because you committed an infraction that you did something dangerous. The facts will show that most infractions committed (anyway, the kind you refer to) don't cause any accident.
I agree with you.

jack action said:
If the contrary were true, nobody would do it. People go over the speed limit because you don't have an accident every time you do. People don't drive on the wrong side of the road because a terrible accident is almost guaranteed in such a case.
I do not agree with you. From an AI point of view: people do drive on the wrong side of the road, and terrible accidents that could be prevented happen. They are rare, but not unseen. An AI would not do that.
jack action said:
So why punish someone when nothing happens?
Because if the law is not respected, it is useless. You would want to punish a man who opens fire in the street in a crowded place but, by chance, doesn't harm anybody, right? That's a pretty good example of why punishing someone who breaks the law but doesn't harm anybody makes sense.

jack action said:
Nobody in his right mind will willingly do something to hurt himself.
The problem is, not everybody is in their right mind. All AIs could be in their right mind.

jack action said:
I don't like having my life guided by the fears of others. You want a car driven by AI because you think it makes better decisions than you? I respect that decision and I might follow your path. You want your neighbors (whom you might not even know) to use driverless cars because you don't trust them to drive a car correctly? I think you're overstepping your bounds. Maybe you're right and one could implicate you in a terrible accident, but, from my point of view, your only option is to thank God you have a super smart car that can identify human-driven cars and avoid crossing their path for your protection.
No. I want a self-driving car to drive me because it is safer. That it makes better and quicker decisions than me, I do not care about. I want my neighbors to do the same because they are a threat to society, just like anti-vaxxers who refuse to vaccinate their children. It is well known that this harms people who cannot be vaccinated or have very low defenses against illness. These anti-vaxxers are a threat to society, and it is a good thing that the law obligates them to vaccinate their children.
If cars were only a threat to those who drive them, I would have zero problem with human drivers. But again, they are a threat to innocent people who aren't even in the car; that is the big problem, which AI could tackle better than humans.

jack action said:
Do you see the difference between controlling what you do and what others do? In a society where everyone is equal, you cannot act as if you are better than others.
I do not understand the "you are better than others" part. You live in a society; there are rules not to harm others, and these are the only "restrictions" you're obligated to follow. This does not go far into "controlling what others do."
jack action said:
Oh yeah, and if you also think that there is such a thing as an infallible law, especially one made by humans, you will be disappointed sooner or later.
I do not think there are infallible laws. Nor do I think that AI could reduce the number of deadly accidents to 0. I do believe, however, that they could do a better job than humans at reducing car accidents, which is the topic of this thread.
 
  • #156
fluidistic said:
I do not think there are infallible laws. Nor do I think that AI could reduce the number of deadly accidents to 0. I do believe, however, that they could do a better job than humans at reducing car accidents, which is the topic of this thread.

That's not exactly the topic of this thread. The topic of the thread is the present tense, not some hypothetical future possibility.

The distinction is important, because between the present and the future is a long transition.

My wife and I earn our living in consulting. Since the downturn in DoD funding in 2012, most of that consulting work is in legal cases involving injury, and most of those injuries are in vehicle accidents. Two trends stand out to us: 1. Most serious accidents are due to inattentive or impaired drivers. 2. Most seven figure legal cases involve older vehicles: 7-15 years old. So even if self-driving cars solve the problem of inattentive or impaired drivers (for those who use them), they won't be a great benefit to general auto safety until all (or the great majority of) those older vehicles (not self-driving) are off the road.

One issue that comes to mind relative to the long transition is the cost. Compared with the range of costs associated with owning and driving traditional vehicles, what is the range of costs associated with owning and driving self-driving vehicles? Requiring every driver to pay these increased costs is going to significantly reduce the number of people who can drive - it will reward the rich and punish the poor.

There are likely better paths to improved safety that don't force millions of drivers off the road. Since insurance is already a universal requirement, insurers can use relatively inexpensive technologies (a hundred dollars or so) to record driving habits and raise rates for inattentive, unsafe, and impaired drivers. Higher insurance rates will have the effect of positive behavior modification, which will achieve improved safety. Right now the lead times are too long, with insurance companies waiting until a driver has tickets and accidents to raise their rates or cancel their policy. These technologies could be made universal in 2-5 years, whereas you're looking at at least 20 years before self-driving cars would be universal in a place like the US.
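The monitoring-and-pricing idea above can be sketched in a few lines. This is a minimal sketch under invented assumptions: the event types, weights, and thresholds below are hypothetical, not any insurer's actual pricing model.

```python
# Hypothetical telematics-based premium adjustment.
# Event names, weights, and thresholds are invented for illustration.

def premium_multiplier(hard_brakes_per_100mi, pct_time_speeding, phone_use_events):
    """Map recorded driving behavior to a premium multiplier (1.0 = base rate)."""
    multiplier = 1.0
    multiplier += 0.05 * max(0, hard_brakes_per_100mi - 2)  # surcharge above 2 hard brakes / 100 mi
    multiplier += 0.02 * pct_time_speeding                  # 2% per point of time spent speeding
    multiplier += 0.10 * phone_use_events                   # flat surcharge per phone-use event
    return round(multiplier, 2)

# A careful driver keeps the base rate; a distracted one pays more.
careful = premium_multiplier(hard_brakes_per_100mi=1, pct_time_speeding=0, phone_use_events=0)
risky = premium_multiplier(hard_brakes_per_100mi=6, pct_time_speeding=10, phone_use_events=3)
```

The point of the sketch is the feedback loop: premiums respond to recorded behavior within a billing cycle, rather than waiting years for tickets and claims to accumulate.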
 
  • #157
what would happen if there was a malfunction in the self-driving car?
 
  • #158
Dr. Courtney said:
One issue that comes to mind relative to the long transition is the cost.

You should factor in that the price of the technology will likely come down big time. Tens of thousands of lives saved: what is the value of that? Hundreds of thousands fewer injured people and damaged cars: again, what is the value of that? Reduced insurance premiums. And for a while they will keep making dumb cars for those who can't afford self-driving cars, just as they keep making flip phones. I am guessing many poor have smart phones.
 
  • #159
fluidistic said:
From an AI point of view: people do drive on the wrong side of the road, and terrible accidents that could be prevented happen. They are rare, but not unseen. An AI would not do that.
You show humans as full of flaws and AI as some model of perfection. From the manufacturer's point of view:
http://www.bbc.com/news/technology-36783345 said:
The latest crash, near Cardwell, Montana, saw a Model X car swerve to hit wooden rails next to a two-lane road.

"This vehicle was being driven along an undivided mountain road shortly after midnight with autosteer enabled," a spokeswoman told the BBC, referring to autopilot's steering function.

"The data suggests that the driver's hands were not on the steering wheel, as no force was detected on the steering wheel for over two minutes after autosteer was engaged - even a very small amount of force, such as one hand resting on the wheel, will be detected.

"This is contrary to the terms of use that are agreed to when enabling the feature and the notification presented in the instrument cluster each time it is activated.

"As road conditions became increasingly uncertain, the vehicle again alerted the driver to put his hands on the wheel.

"He did not do so, and shortly thereafter the vehicle collided with a post on the edge of the roadway.

"Autosteer... is best suited for highways with a centre divider.

"We specifically advise against its use at high speeds on undivided roads."
Even the manufacturer doesn't trust its vehicles on undivided roads and prefers the full attention of a human being instead. AI is not as magical as you tend to present it.
fluidistic said:
I want my neighbors to do the same because they are a threat to society.
fluidistic said:
These anti-vaxxers are a threat to society
They are not threats to our society; they are members of our society. And when members of our society need help (drug or alcohol problems, trust or self-confidence issues, mental problems, etc.), we shouldn't punish them or impose more obligations on them (like owning a driverless vehicle); we should help them. Otherwise, the whole concept of society doesn't make sense; nobody would join. Yes, their actions may cause harm to society, but we should see this as our problem, not as them vs. us. Self-driving vehicles will not resolve the drinking problem of an alcoholic, which has impacts in areas other than driving. Solving (or preventing) the drinking problem solves every problem.
 
  • #160
jack action said:
You show humans as full of flaws and AI as some model of perfection

Humans are as good as they are going to get; AI will just keep getting better. I don't see how that can't happen.
 
  • #161
Spinnor said:
Humans are as good as they are going to get; AI will just keep getting better. I don't see how that can't happen.
That is a gratuitous and unfounded statement for both humans and AI.

What is the fate of human beings: to create AI to replace humans? To what end would humans do that? Especially if we assume humans have no respect for life and their surroundings. Why would they care?
 
  • Like
Likes russ_watters
  • #162
Spinnor said:
You should factor in that the price of the technology will likely come down big time. Tens of thousands of lives saved: what is the value of that? Hundreds of thousands fewer injured people and damaged cars: again, what is the value of that? Reduced insurance premiums. And for a while they will keep making dumb cars for those who can't afford self-driving cars, just as they keep making flip phones. I am guessing many poor have smart phones.

Extraordinary claims require extraordinary proof. Sometimes the price comes down and systems become more reliable. Sometimes not so much.

"Tens of thousands of lives saved." I've seen too much software and too many bugs, especially in real-time systems, to accept this without any real proof. It certainly is a possibility, but there are other possibilities also. And there is zero proof that self-driving vehicles offer more in terms of safety in the long term than other, much more affordable, technological approaches, such as insurers using technology to more closely monitor and raise rates on (or cancel policies of) inattentive, impaired, and unsafe drivers.
 
  • #163
jack action said:
That is a gratuitous and unfounded statement for both humans and AI.

It seems obvious. Humans evolve on a scale of what, tens of thousands of years, and technology evolves every year. And are we evolving into better drivers? I doubt it. Well, we could spend more time teaching people to drive better, but many humans are selfish, which leads to many accidents. And on the few occasions where I have had to drive to and in New York City, what a stressful nightmare. I hate it. My brain gets overloaded.
 
Last edited:
  • #164
Dr. Courtney said:
Extraordinary claims require extraordinary proof.

I am sorry, but I don't think my claims are extraordinary.

Dr. Courtney said:
I've seen too much software and too many bugs
We will just have to agree to disagree. It seems that in every task where we pit humans against computers and robots, the machines eventually better us. And when it comes to self-driving cars, we are obviously in the infancy of this technology. I don't see an obstacle, in the near term of say 10 to 20 years, to self-driving cars bettering me, an average driver (below average at night). There is just too much money to be made with this technology.
 
Last edited:
  • #165
Way too many posts on this topic.
Who or what should be penalized when a self-driving vehicle violates a traffic or safety rule?
 
  • Like
Likes Dr. Courtney
  • #166
Spinnor said:
I don't see an obstacle...
It doesn't look like that Model X did either...
BBC News said:
The latest crash, near Cardwell, Montana, saw a Model X car swerve to hit wooden rails next to a two-lane road.

You made a good pun, though... :biggrin:
 
  • Like
Likes Dr. Courtney
  • #167
symbolipoint said:
Way too many posts on this topic.
BBC News said:
...the vehicle collided with a post...
You made a good pun, too... :biggrin:
 
  • Like
Likes Dr. Courtney
  • #168
Spinnor said:
It seems obvious. Humans evolve on a scale of what, tens of thousands of years, and technology evolves every year.
Humans have been evolving for millions of years; technology is only a few decades old. Everything goes fast once a "key" has been found, but it stabilizes rapidly. Compare an airplane from 1905 with one from 1930: they are nothing alike. Then compare airplanes from today to ones from 25 years ago: not many differences.
Spinnor said:
It seems that in every task where we pit humans against computers and robots, the machines eventually better us.
Are there so many computers that make better decisions than humans right now? Sure, there are tools built by humans to give them leverage to execute some tasks, like a hammer that multiplies the impact force that any human arm can produce, or an autonomous engine that can produce more work than any human "asks" it to do. Computers are only another man-made machine, one that can calculate very fast when a human "asks" it to, nothing more. To my knowledge, nobody thinks an engine is better than a human being.

A human can build wonderful things with the powerful machines he conceived, like bulldozers and such. But with a wrong decision, he can destroy in incredible ways as well. With computers, he can make good decisions faster by analyzing already-stored decisions or protocols, but he will also be able to make bad decisions faster too. Computers don't make decisions and they have no morals; the people who build and use them do.
Spinnor said:
There is just too much money to be made with this technology.
Amen to that.
symbolipoint said:
Way too many posts on this topic.
Who or what should be penalized when a self-driving vehicle violates a traffic or safety rule?
Well, just read a few posts before yours and you'll find out that, apparently, AI is the only human-made machine that will be perfect, won't have bugs, and will answer correctly every moral dilemma humankind could face. Therefore your question is irrelevant. :wink:
 
  • #169
jack action told me:
Well, just read a few posts before yours and you'll find out that, apparently, AI is the only human-made machine that will be perfect, won't have bugs, and will answer correctly every moral dilemma humankind could face. Therefore your question is irrelevant. :wink:
NO. Not irrelevant! You remember the H.A.L. 9000? You really believe human engineering will make such a perfect machine?
 
  • #170
symbolipoint said:
You really believe human engineering will make such a perfect machine?
Well... HAL did. . :wink:
 
  • #171
symbolipoint said:
jack action told me:

NO. Not irrelevant! You remember the H.A.L. 9000? You really believe human engineering will make such a perfect machine?
Tongue in cheek it was that response think I.
 
  • Like
Likes jack action
  • #172
OCR said:
You made a good pun, though... :biggrin:

What is important is accidents per 1000 miles driven; self-driving cars can only improve. Humans are stuck at sucky. Self-driving cars in accidents will get a lot of press. The hundreds of people who died in auto accidents over the past week won't.

Went for a delightful bike ride at sunset. Part of the ride took me on a new bike path built next to a new bypass highway. In someone's infinite stupidity, they erected a wooden post-and-rail fence instead of a guard rail next to the road. It is quite a joke and reminds me, every time I drive the road, how bad drivers are. In the 8-mile stretch of the road there must be 50 or more places where humans could not keep their car on the road and took out sections of the fence. Self-driving cars are not a matter of if but when.
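The accidents-per-mile metric mentioned above can be made concrete with a toy calculation. All the counts and mileages here are invented for illustration; with so few autonomous miles driven, the estimated rate carries huge uncertainty, which is partly why individual self-driving crashes draw outsized press.

```python
# Toy accidents-per-mile comparison. The fleet numbers below are invented
# for illustration only; they are not real statistics.

def rate_per_million_miles(accidents, miles):
    """Normalize accident counts per million vehicle-miles driven."""
    return accidents / miles * 1_000_000

# A large hypothetical human-driven fleet vs. a tiny autonomous one.
human_rate = rate_per_million_miles(accidents=1200, miles=600_000_000)
autonomous_rate = rate_per_million_miles(accidents=3, miles=2_000_000)
```

Normalizing by miles driven is what makes the two fleets comparable at all; raw crash counts alone would make the far larger human fleet look worse by default.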
 
  • #173
jack action said:
Compare an airplane from 1905 with one from 1930: they are nothing alike. Then compare airplanes from today to ones from 25 years ago: not many differences.

Yes, so self-driving cars at first will suck, just like the early airplanes did. In time they will be nearly perfected. But I don't think humans are getting any better at driving on average; machines, on the other hand, will get better. How good do self-driving cars have to get to make everyone here feel good about them? Would you rather have some drunk flying down the road towards you, or a self-driving car? Some here sound like they would rather take their chances with the drunk.
 
  • #174
Spinnor said:
What is important is accidents per 1000 miles driven; self-driving cars can only improve. Humans are stuck at sucky. Self-driving cars in accidents will get a lot of press. The hundreds of people who died in auto accidents over the past week won't.

Went for a delightful bike ride at sunset. Part of the ride took me on a new bike path built next to a new bypass highway. In someone's infinite stupidity, they erected a wooden post-and-rail fence instead of a guard rail next to the road. It is quite a joke and reminds me, every time I drive the road, how bad drivers are. In the 8-mile stretch of the road there must be 50 or more places where humans could not keep their car on the road and took out sections of the fence. Self-driving cars are not a matter of if but when.
Spinnor said:
Yes, so self-driving cars at first will suck, just like the early airplanes did. In time they will be nearly perfected. But I don't think humans are getting any better at driving on average; machines, on the other hand, will get better. How good do self-driving cars have to get to make everyone here feel good about them? Would you rather have some drunk flying down the road towards you, or a self-driving car? Some here sound like they would rather take their chances with the drunk.
Too many people are already employed to drive. The driverless vehicles will mean many unemployed truck and taxi drivers.
 
  • #175
Spinnor said:
Yes, so self-driving cars at first will suck, just like the early airplanes did. In time they will be nearly perfected.

Perhaps, but the question is in the present tense, and I don't see self-driving cars that everyone can afford getting where they need to be for 20-50 years. Lots of poor folks drive cars worth $500-$2000. Are self-driving cars ever going to be in that price range (or the inflation-adjusted equivalent)?

Spinnor said:
But I don't think humans are getting any better at driving on average

The issue in safety is the lower tail of the distribution, not the average. If insurance companies can cut off the lower tail of the distribution or force them to get better through monitoring and higher premiums, there can be significantly fewer accidents.
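The lower-tail point can be illustrated with a short simulation; the "lower tail" of driver skill shows up as the upper tail of per-driver crash risk. The lognormal distribution and its parameters are assumed purely for this sketch, not fitted to any accident data.

```python
import random

# Toy model: if per-driver crash risk is heavily skewed, the riskiest few
# percent of drivers account for a share of expected accidents far larger
# than their share of the population. Distribution parameters are assumed.

random.seed(0)
risks = sorted(random.lognormvariate(mu=0.0, sigma=1.5) for _ in range(10_000))

worst_5_percent = risks[-500:]                  # the riskiest 5% of drivers
tail_share = sum(worst_5_percent) / sum(risks)  # their share of expected accidents
```

In this toy model the riskiest 5% of drivers generate well over a quarter of the expected accidents, which is why interventions aimed at the tail (monitoring, higher premiums, cancellation) can cut accidents disproportionately to the number of drivers affected.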

Spinnor said:
Machines, on the other hand, will get better. How good do self-driving cars have to get to make everyone here feel good about them?

Why not have the machine improvements advising the good drivers and reporting the bad drivers to authorities to get them off the road? Why do the machines need control over the vehicles?

Spinnor said:
Would you rather have some drunk flying down the road towards you, or a self-driving car? Some here sound like they would rather take their chances with the drunk.

This would only work if self-driving cars were mandated by law so that all the careless drivers have them. But wait: if careless drivers obeyed the law, there would be no drunk drivers. You want a new law to solve the problem of people not obeying an existing law.

When traditional cars are outlawed, only outlaws will have traditional cars.
 
  • Like
Likes symbolipoint
