Do you feel safer with self-driving cars on the road?

In summary, the conversation discusses the limitations and potential benefits of self-driving cars. Some individuals are skeptical and believe that human drivers are still necessary for safe driving, while others argue that self-driving cars could potentially improve safety on the road. The conversation also touches on the idea of feeling safe versus actually being safe, and the potential for self-driving cars to handle complex situations involving pedestrians. There is also mention of the development and progress of self-driving car technology, and differing opinions on when it will become widely adopted.

Do you feel safer with self-driving cars on the road?

  • Yes: 31 votes (41.3%)
  • No: 37 votes (49.3%)
  • No opinion: 7 votes (9.3%)
  • Total voters: 75
  • #36
jack action said:
Am I the only one who likes driving?
No! I love it too, and I agree with many of the things that you're saying (see below), but the main issue is which one is safer, or how we feel ... . I personally feel safer with technology (if the programming is right), and I rely on it all the time, than with humans (including myself). Humans make mistakes more often than machines err (IMO), and for machine errors there are also programs to predict and fix them; humans always have made mistakes and they always will ... . Their (our) behaviour and efficiency is affected by emotions, mood, health factors (including sometimes unforeseeable ones, e.g. sudden dizziness, a heart attack, etc.), and so on; there is also the big issue of subjective judgement. I am not saying that these are necessarily bad or negative, but they can become very unsafe many times. That's basically why I voted "Yes" in our poll.
But I see your logic, with which I partially agree, and I liked many of your arguments.
The issue of freedom, initiative and control concerns me too, besides the fear issue that you mention in your other post ...
Thus, regarding
jack action said:
I know that I seem to become less and less part of «most of us».
Not at all! Don't see it that way. (I don't.) The current poll is well in your favour anyway, as we speak! ...
jack action said:
When I was a kid, I was one all the time and couldn't grow up fast enough to be in the driver's seat.
I don't disagree. Driving is creative. But what I meant was that every day we have to rely on many types of machines (cars, buses, trains, boats, ships, planes, etc.) for our transportation, safety, etc., and on other humans too, whom we do not have control over. In other words, we can't control everything! Driving is the least of it.
And as far as piloting goes, although I too would love to become a pilot one day, right now I am not one, and thus when I fly I am just a
jack action said:
piece of meat that gets to be moved around
(it is similar with other types of public transportation [buses, trains, etc.], especially if you live in a metropolitan area)
jack action said:
Although I don't mind people having driverless vehicles if they want one. I just wish that we won't reach a point where it is our only choice.
Well put! I agree. That is my fear and concern as well (despite my 'possible future projected post' earlier above [#24]). But I doubt that this will ever happen exactly that way (just like with cell phones, e.g. - you can avoid having one if you do not wish to, while most people have one ...). However, nobody can foresee the future exactly. Only the people that create it can have a better idea! ...

But in any case the main issue here is about safety and our poll (what we think) ...
 
  • Like
Likes jack action
  • #37
So, to add another thing to the discussion: I voted "no opinion", mainly because of how the question was phrased - in the present tense. At the current time, I do not think there are enough self-driving cars to noticeably affect safety. In the future, I would assume that they will not be allowed on the road en masse unless they work at least as well as the average human driver (a low bar, I know), which I think there is a reasonable chance of achieving. This is a matter of regulation - just as it is a matter of regulation which humans we allow to drive on the roads. To be honest, I think any self-driving car model would need to go through significantly harder testing than the driver's test you have to pass to get your licence.
 
  • Like
Likes Stavros Kiri
  • #38
I voted yes because, as there are more self-driving cars (which seem to be statistically safer than the average human driver), driving should become safer.
However, I am not interested in using one right now.

However, as @Greg Bernhardt mentioned, the first few versions of self-driving cars would give me mixed feelings, because they want you (as I understand it) to be ready to take over when some weirdness arises that the computer can't handle. This would make me want to stay aware of all the usual driving issues, which would take away what I see as the primary benefit of having a self-driving car: not having to pay attention to the driving, and being free to take a nap, read something, or whatever (similar to being on an airplane or train).
These conflicting human tendencies will remain in tension until later versions make this less relevant.

A real benefit I see of self-driving cars would be a much greater awareness of things in blind spots to be avoided.

I also like driving (unless I'm sleepy or want to do something else).
I prefer a stick shift which forces you to be more involved with the functioning of the car. More fun.
This might be lost, but presumably there would be a manual version available for use when desired.

Another issue that we discussed at the Portland meet-up a few weeks ago was what if you wanted to go faster than the speed limit for some reason (almost everyone does this on I-5 in my area).
Would the car let you?
If it did, would it modify its behavior if it saw a cop car ahead (like a person would)?
(Why would the cops let this info out?)
(Who would get the ticket? Presumably the human.)

Turns out, my son already has an app on his phone that tells you (fairly accurately) when you are coming up on cops on the road.
It uses crowd sourced information from other drivers. We road tested it going to the eclipse.
However, in my opinion, it takes too much attention for a properly involved driver to use (unless maybe you can just talk to it).
 
  • #39
UWouldKnow said:
...the only crashes I've been in could have been avoided with a self driving car
While I expect self-driving cars to be safer, we do need to be careful about flawed data analysis when it comes to their safety. The types of accidents people and self-driving cars get into are different in at least some cases, so it is possible to say that 100% of human-caused accidents would have been avoided while still having no idea whether the self-driving car is safer, because we have no idea what types of accidents the self-driving car is susceptible to until we have data on it. The fatal Tesla accident with the truck is such an example.

Similarly, it's nice that the Google cars have apparently been safe, but does their experience really translate? City driving causes a lot of minor accidents but almost no deaths, because the speeds are so low. How does a Google car do on a highway at 70 mph when it suddenly loses a lane marker? Unfortunately, the only way to find out what types of accidents they are susceptible to is for them to get into tough situations and potentially get in accidents.
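As a rough illustration of this data-analysis caveat, here is a minimal Python sketch (the fleet mileage and fatality count are assumed for illustration, not real figures): with only one recorded fatality, the uncertainty band on the autonomous fatality rate is so wide that it cannot be distinguished from the human rate.

```python
# Minimal sketch (illustrative, not real fleet data) of why a single
# fatality tells us almost nothing about comparative safety.
import math

def rate_with_ci(events: int, miles: float, z: float = 1.96):
    """Fatality rate per mile with a crude ~95% interval.

    Uses a normal approximation to the Poisson count (sqrt(n) error),
    which is rough for small counts -- exactly the regime early
    self-driving data sits in.
    """
    rate = events / miles
    err = z * math.sqrt(max(events, 1)) / miles
    return rate, max(0.0, rate - err), rate + err

human_rate = 1 / 90e6  # roughly 1 fatality per 90M miles (US-scale figure)

# Hypothetical autonomous fleet: 1 fatality over ~130M miles.
av_rate, lo, hi = rate_with_ci(events=1, miles=130e6)
print(f"AV rate: {av_rate:.2e}/mi, ~95% CI [{lo:.2e}, {hi:.2e}]")
print(f"human rate {human_rate:.2e}/mi inside CI: {lo <= human_rate <= hi}")
# With n = 1 the interval easily contains the human rate, so this one
# data point cannot show the AV fleet is safer -- or more dangerous.
```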
 
  • #40
russ_watters said:
Unfortunately, the only way to find out what types of accidents they are susceptible to is for them to get into tough situations and potentially get in accidents.
Simulations (physical or virtual)?
 
  • #41
WWGD said:
Simulations (physical or virtual)?
Yes, I'm sure they are doing extensive simulations. But the problem is that you only simulate what you know you should simulate. The types of problems I am most worried about are where the car doesn't know it is in trouble and as a result, there is nothing to simulate (or the simulation produces no result). The Tesla crash I mentioned above is such an incident. My understanding is that the car never recognized there was a hazard, which is why it never took action, much less notified the driver that it was unable to deal with the situation. If the driver had taken control and avoided the accident, there likely would have been nothing to flag Tesla that the software had failed to recognize a hazard and that they should work on fixing the software glitch (my understanding is that the cars are constantly collecting data and reporting it back to Tesla to use in such simulations). We can be sure by now they simulated it, because someone died and therefore they had to investigate. But heck, I bet the first few times the engineers ran simulations of the accident after the fact, the computer reported to them that no accident happened. It was a literal and figurative blind spot.

Now, these features are still in development, and I previously criticized Tesla for using their customers as beta-test subjects of something that could kill them. Hopefully by now better controls are in place to avoid that, but I'm still not convinced that this accident could have been avoided by the driver. Most of these types of features have warnings for the driver if the computer loses control, and legalese in their owner's manuals to protect the car company by saying the computer never has final control, but that legalese won't protect the driver. The driver in the Tesla crash would have had to think about whether or not the computer saw the truck and decide, accurately and in time, that it didn't, and what action to take. He very well might have been thinking "oh, there's no way it doesn't see this truck" until it was past the point of being able to avoid it. Ultimately though, we want true driverless cars, so that's a little out of bounds.

What's needed is that the engineers designing these things have had enough time and money to put enough effort into designing the simulations that they've thought of every realistic hazard to throw at the car. And since these systems are almost certainly all proprietary, that's a lot of different companies doing an enormous number of simulations.
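To make the "blind spot" failure mode concrete, here is a toy sketch (purely hypothetical, not any manufacturer's actual pipeline): if the simulated perception model was never built to recognize a hazard class, the simulator scores the run as a success.

```python
# Toy illustration of the simulation blind spot: a simulation can only
# flag hazards that its perception model was built to recognize.
from dataclasses import dataclass

@dataclass
class Obstacle:
    kind: str          # e.g. "car", "pedestrian", "crossing_truck"
    distance_m: float  # distance ahead of the vehicle

# Assume the perception model only knows these classes; a truck broadside
# across the road ("crossing_truck") is missing by construction.
KNOWN_HAZARDS = {"car", "pedestrian"}

def simulate(scenario):
    for obstacle in scenario:
        if obstacle.kind in KNOWN_HAZARDS and obstacle.distance_m < 50:
            return "BRAKE: hazard detected"
    # Nothing recognized, so the run is scored as a success -- even though
    # an unmodeled obstacle was physically in the vehicle's path.
    return "OK: no hazard detected"

print(simulate([Obstacle("crossing_truck", 30.0)]))  # "OK: no hazard detected"
```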
 
  • #43
All is well until glitches happen. Systems, no matter how well built, will always have glitches, bugs, or whatever you call them, and those glitches can be the system's fault or caused by external factors.
 
  • Like
Likes ISamson
  • #44
xblaze said:
All is well until glitches happen. Systems, no matter how well built, will always have glitches, bugs, or whatever you call them, and those glitches can be the system's fault or caused by external factors.
This is true for both proposed modes of vehicle operation in this thread. The question is which one will have the fewest and least severe glitches.
 
  • Like
Likes NFuller
  • #45
WWGD said:
But I think the standard is: is it safer when humans do it? No method will be completely safe.

But I think there are some key features of a drunk person that can be programmed into a computer.
 
  • #46
Orodruin said:
This is true for both proposed modes of vehicle operation in this thread. The question is which one will have the fewest and least severe glitches.
With that in mind, we could just hope for the best. There are cons to each mode of operation, as there are pros. Humans err mostly because they act on their emotions; machines, such as the ones at hand, because of the way they're built or programmed.

Here's some additional reading, a company blog I came across when I was looking for companies related to robotics: http://www.powerjackmotion.com/make-way-smart-robots/ (It's Time to Make Way for Smart Robots in Your Industry!). One of the topics there is self-driving, but the discussion is quite introductory, and you have to dig deeper to learn more about the subject.
 
  • #47
A mother and her 11-month-old child were T-boned and killed at an intersection a couple of blocks from me yesterday. Bring on autonomous cars ASAP!
 
  • Like
Likes russ_watters and Stavros Kiri
  • #48
Greg Bernhardt said:
A mother and her 11-month-old child were T-boned and killed at an intersection a couple of blocks from me yesterday. Bring on autonomous cars ASAP!
It would have been better if their deaths were the result of a driverless machine? Less guilt maybe?
 
  • Like
Likes 256bits
  • #49
jack action said:
It would have been better if their deaths were the result of a driverless machine? Less guilt maybe?
It was a drunk driver going 70 through a red light on a 35 mph road. Would a machine allow that?
 
  • #50
jack action said:
It would have been better if their deaths were the result of a driverless machine? Less guilt maybe?
The obvious point being made was that it would not have happened if the car was autonomous.
 
  • #51
Orodruin said:
The obvious point being made was that it would not have happened if the car was autonomous.
My point is that people will still die in horrible accidents, autonomous vehicle or not.
 
  • Like
Likes 256bits
  • #52
jack action said:
My point is that people will still die in horrible accidents, autonomous vehicle or not.
Which is a moot point unless you consider the rate at which it happens. Women still die in childbirth today. Does that mean that the medical care in relation to childbirth today is on the same level as 500 years ago?
 
  • Like
Likes StoneTemplePython
  • #53
jack action said:
It would have been better if their deaths were the result of a driverless machine? Less guilt maybe?

I would claim it would be better, since the likelihood of such events would be much lower for a decent autonomous vehicle than for a human-driven one.
In the case of an autonomous vehicle, such an event would not be a popularized indicator (through a single evocative story that people can relate to) of a much larger body of similar occurrences (which, psychologically speaking, fade into a diminished concern about statistical facts), as it would be in the case of the human driver.

Sure it is a terrible thing when such accidents occur, and in that sense they are equivalent.
However, in a larger (more statistical) view of things, they are not equivalent.

It's a trees-and-forest thing: which point of view do you use when making a value judgement?
You can't really use both (perform an analysis based on both points of view and assume they will both lead to the same conclusion).
One's a view at an individual level, the other is a more global point of view.
 
  • Like
Likes 256bits
  • #54
@Greg Bernhardt , @Orodruin , @BillTre :

Greg's post evoked in me a feeling that I expressed before on this forum and I just realized it was in this particular thread, 3 weeks ago, in post #29.

Given the excitement with which Greg presented his opinion, it sounds to me like the best way to make the best decision is not to make one at all and to count on a more «knowledgeable» someone or, in this case, something.

I'm not even debating the fact that machines can make better decisions in a statistical sense. For the sake of argument, let's accept they do.

What are the consequences of a human being not making any decision? What is the point of living? Let's even consider the case of the drunk driver who had an accident. He made a bad decision, true. Why? What brought this person to that point? Is removing this person's entitlement to make decisions helping him or her? Is this person just supposed to say: «I don't have to do anything; anyway, I'm not good enough, let the machine do it»?

I know the subject of this thread is self-driving cars, but it seems nobody wants to make decisions anymore. It is a way to disempower oneself and others. Sure, it looks nice when you look at people making bad decisions. But how are we supposed to learn to differentiate good from bad, if we make fewer and fewer decisions? How will we know if the machine made the right decision, if we don't even develop our own judgement? Are we going to doubt ourselves all the time? Should AI decide who will run the country? After all, it will probably make a better decision than the average voter, right? When will we end this journey where we say: «People shouldn't be allowed to do that»?

It's really the «Bring on autonomous cars ASAP!» comment that bothers me. I don't believe it is the Holy Grail. I'm not even sure the problem it is supposed to solve is that big of a problem. Most people don't make bad decisions when behind the wheel. It is even far from being a major cause of death for the human race. Statistics, right? It works both ways.

I like the concept of machines assisting humans, but I don't like when humans are removed from the decision process. It is a very important one, not only for the action of the moment, but for the development of the individual as well.

So, to relate all of this to this thread - Do you feel safer with self-driving cars on the road? - I'm more afraid of people losing their ability to make good decisions in general than of the death toll caused by car accidents due to bad decisions.
 
  • #55
jack action said:
So, to relate all of this to this thread - Do you feel safer with self-driving cars on the road? - I'm more afraid of people losing their ability to make good decisions in general than of the death toll caused by car accidents due to bad decisions.
So when I am out late drinking, should I call an Uber or drive home? The good decision is to call the Uber - or to start up your own autonomous car.

Most people don't make bad decisions when behind the wheel. It is even far from being a major cause of death for the human race. Statistics, right? It works both ways.

Agree to disagree; I think people in general are terrible drivers. Each time I drive, I see people driving crazily and dangerously. I see people blowing through red lights every single day.
 
  • Like
Likes Orodruin
  • #56
If I am not mistaken, even with the one Tesla fatality the death rate was about 1 per 110M miles driven, vs. 1 per 90M miles for human drivers. And since, for an early product fault, reliability typically increases 10- to 20-fold after the failures are identified and accounted for, I would not be surprised to see the final rate amount to better than a 90% reduction. Given this, IMO, they are already safer than human drivers, by a considerable margin.
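For what it's worth, a back-of-envelope check of those figures (using the quoted rates and the assumed 10- to 20-fold maturation; none of this is measured data) does land above a 90% reduction:

```python
# Back-of-envelope check of the figures above (quoted numbers plus the
# post's assumed 10-20x maturation; none of this is measured data).
human_rate = 1 / 90e6   # ~1 fatality per 90M miles for human drivers
av_rate    = 1 / 110e6  # ~1 fatality per 110M miles with Autopilot (n = 1!)

print(f"today: AV rate is {av_rate / human_rate:.0%} of the human rate")

for factor in (10, 20):  # assumed reliability improvement after early faults
    matured = av_rate / factor
    print(f"{factor}x better -> {1 - matured / human_rate:.0%} below human rate")
# 10x -> ~92% reduction, 20x -> ~96%: consistent with "better than 90%".
```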

Not to mention that the technology can be applied most heavily to the highest-risk drivers: teens who do not care about driving themselves, fatigued drivers, the drunk, and the elderly.

The human-intervention model is a fool's errand meant to make the public feel better. If you are not engaged in the act of driving, there is very little likelihood you can instantly and effectively take over control and be aware of the entire situation - like when your boss calls you out in a meeting for not paying attention: you're screwed. That is just not human nature.

Then, beyond full autonomy, there is the amount of experience the vehicles already have with basic augmentation - the amount of situational learning and the (to me) remarkably small number of failures need to be considered. The number and types of sensors and the quantity of data being used are dramatically greater than what a human uses. The learned experience is cumulative, and hard-coded in. Humans only learn what they specifically have been taught; we do not get the collective experience of the other drivers.

The vast majority of accidents are not caused by an unusual situation - they are caused by human fallibility: inattentiveness, fatigue, anger, arrogance (thinking you are better than you are), etc. These are exactly the same factors that cause general safety issues; to me, it is about removing the least reliable element.

So clearly I was a Yes.

The more interesting debate is how to deal with the disruption to the general economy.
 
  • Like
Likes BillTre
  • #57
Greg Bernhardt said:
Agree to disagree; I think people in general are terrible drivers. Each time I drive, I see people driving crazily and dangerously. I see people blowing through red lights every single day.
I very much agree with this. Contrary to jack's comment, I believe all humans make bad decisions when driving, and I will go so far as to say that I am sure I make them myself. Not to the extent of something possibly life-threatening every time I am behind the wheel, but they will definitely happen, and if they happen at the wrong moment they may cost me or someone else their life. I do think that I am a good enough driver for the expected number of deaths due to my driving to be significantly smaller than one, but if you have enough people like me driving, statistics will get someone in the end, and that someone's life will be ruined or lost.

In fact, I do not see any reason except vanity why everyone should feel driving is a "right". In cities where public transport is well developed, there is already very little need for every person to be able to drive. When I take the commuter train tomorrow morning, I will be one among a thousand people on that train, of whom 999 will not be driving it. What does one more matter in that respect?
 
  • #58
Orodruin said:
I very much agree with this. Contrary to jack's comment, I believe all humans make bad decisions when driving, and I will go so far as to say that I am sure I make them myself. Not to the extent of something possibly life-threatening every time I am behind the wheel, but they will definitely happen, and if they happen at the wrong moment they may cost me or someone else their life.
Sure, I am biased, but I think I'm a good driver. Still, I can certainly think of at least a handful of mistakes I've made in the past that, had conditions been a little different, could have caused a significant accident.
 
  • Like
Likes Stavros Kiri
  • #59
Teleportation is the answer.
Just set the co-ordinates and then arrive where you want to be in a few seconds.
I voted in favor of automated transport systems.
 
  • Like
Likes Stavros Kiri
  • #60
Greg Bernhardt said:
I think people in general are terrible drivers.
This is a very pessimistic view based on irrational fear. I'm pretty sure that most people who take the road on a given day arrive at their destination without any bad event whatsoever. Something in the order of 99.99...% or more. To me that most likely means that - statistically - people make good decisions, regardless of one's opinion on «what could've happened». The reality is that it doesn't happen.

I often hear people say «He was lucky, it could have been worse.» But to me, it seems that the reality is more often «He was unlucky, it usually doesn't end this way.» Looking at life that way gives you a more optimistic (realistic?) view of the world.
Orodruin said:
I believe all humans take bad decisions when driving and I will go so far as saying that I am sure I do them myself.
Again, this little faith in humankind is what fascinates me. It's as if being a human being were some sort of disease that needed to be cured.

You are also introducing that self-doubt I was talking about. Regardless of what seems to be a good driving record for yourself, you seem to still prefer not making decisions, leaving it to a train operator or an AI. What else do you think others can do better than you? Where will you stop? Because I can assure you that there is always someone who can do things better than you, probably even in your field of expertise.

It might sound cliché, but the only way you know there are good decisions is because there are bad decisions. There is no way around it. Wanting to rid the world of bad decisions IS a bad decision.

Again, I'm OK with making better machines. But do it for the right reasons. Do it for the fun of it, not to save the human race from itself.
 
  • Like
Likes Stavros Kiri
  • #61
jack action said:
Something in the order of 99.99...% or more.

More than 30,000 deaths and probably hundreds of thousands of injuries last year.

[Chart: Motor vehicle deaths in the US]
That is a lot of pain and suffering.
 
  • Like
Likes ISamson, russ_watters, Stavros Kiri and 1 other person
  • #62
jack action said:
To me that most likely means that - statistically - people make good decisions, regardless of one's opinion on «what could've happened». The reality is that it doesn't happen.
That is an illusion based on most potentially fatal mistakes actually not being fatal. That you arrive safely is correlated with, but not equivalent to, driving safely and making good decisions. For example, failing to pay proper attention when driving across a seldom-used zebra crossing: this is a mistake that is going to go completely unnoticed until it doesn't. That does not make the mistake any less of a mistake.

jack action said:
You are also introducing that self-doubt I was talking about. Regardless of what seems to be a good driving record for yourself, you seem to still prefer not making decisions, leaving it to a train operator or an AI. What else do you think others can do better than you? Where will you stop? Because I can assure you that there is always someone who can do things better than you, probably even in your field of expertise.
What you describe, I would call severe hubris. There is also a clear difference in where one draws the line. It makes sense for me to continue doing what I do because I add (at least in some part) to the research in my field. Are there people better than me? Sure, but they cannot do everything themselves. This is clearly not the case with autonomous vehicles, or with cars in general - you do not need more than one driver (or zero, in the case of the autonomous car). More drivers will not make the car safer or accomplish its task better.

When it comes to governing, I believe the last few years have clearly shown that a main flaw in democracy is that people are easy to influence with feelings and emotions based on false or invented facts. If it were sufficiently advanced and benevolent, I would be prepared to hand government over to an AI.

You are talking about people handing over their decisions to a machine, removing their own decisions, but the truth is that many people already hand many of their decisions over to others. The only difference is that these others happen to be other humans (most of the time).

jack action said:
It might sound cliché, but the only way you know there are good decisions is because there are bad decisions.
I think this is nonsense, to be honest. The way that you weigh decisions is by predicting and weighing outcomes against each other. You do not need to try it to know that hitting on 20 when the dealer shows a 6 is a bad decision.

jack action said:
But do it for the right reasons.
I would argue that saving human lives is a good reason. In particular if it only comes at the expense of humans no longer making monotonous, error-prone decisions. The car is not deciding where you should go. It is removing a monotonous task that most people's brains struggle with.
 
  • Like
Likes russ_watters and Stavros Kiri
  • #63
Elevators (lifts) are generally considered to be better than walking up a ten-story building.
 
  • #64
Greg Bernhardt said:
It was a drunk driver going 70 through a red light on a 35 mph road. Would a machine allow that?
Only if it malfunctioned. Could it happen? Yes (and it will happen), but people "malfunction" a lot more often! ... [if not every day, all the time - some people, at least]
Stop sign and red light violations are almost more common than the non-violations! Then there is speeding and drunk driving ...
[I had some good video links, showing statistics - if I can find them]

Machines normally wouldn't do any of that. Code is explicit.
[That alone drops the chances of accidents by at least 90%, I think ...]
 
  • Like
Likes Orodruin, ISamson and rootone
  • #65
Code malfunctions can be corrected; drunk drivers, not so easily.
 
  • Like
Likes ISamson and Stavros Kiri
  • #66
jack action said:
This is a very pessimistic view based on irrational fear. I'm pretty sure that most people who take the road on a given day arrive at their destination without any bad event whatsoever. Something in the order of 99.99...% or more. To me that most likely means that - statistically - people make good decisions, regardless of one's opinion on «what could've happened». The reality is that it doesn't happen.
You have a good point there, and in that whole post of yours! I think it's a very wise post, although I voted for self-driving cars, for various reasons.
But I think one also has to look at the statistics for the accidents that do happen (on a given day) versus the number of mistakes and violations (huge, on that same day!) that did or did not cause an accident. You in fact want to eliminate all of that, regardless of the 99.99...% that you perhaps correctly refer to. And machines almost do that.
 
  • #67
jack action said:
@Greg Bernhardt , @Orodruin , @BillTre :

Greg's post evoked in me a feeling that I expressed before on this forum and I just realized it was in this particular thread, 3 weeks ago, in post #29.

What are the consequences of a human being not making any decision? What is the point of living?

I know the subject of this thread is self-driving cars, but it seems nobody wants to make decisions anymore. It is a way to disempower oneself and others. Sure, it looks nice when you look at people making bad decisions. But how are we supposed to learn to differentiate good from bad, if we make fewer and fewer decisions? How will we know if the machine made the right decision, if we don't even develop our own judgement? Are we going to doubt ourselves all the time? Should AI decide who will run the country? After all, it will probably make a better decision than the average voter, right? When will we end this journey where we say: «People shouldn't be allowed to do that»?
I really think you are overthinking this. Mostly, what we want machines to do for us are the things that are too hard, boring, or dangerous. It doesn't get in the way of our living; it frees us to do the living we really want to do.
It's really the «Bring on autonomous cars ASAP!» comment that bothers me. I don't believe it is the Holy Grail. I'm not even sure the problem it is supposed to solve is that big of a problem. Most people don't make bad decisions when behind the wheel. It is even far from being a major cause of death for the human race. Statistics, right?
No, actually, it really is a very significant cause of death for humans in developed countries like the US. It's higher than 1% overall and, depending on your demographic, can be very much higher than 1%.
https://www.cdc.gov/injury/wisqars/overview/key_data.html

Cancer and heart disease are far and away the most significant risks of death, but since they almost exclusively happen to old people, for every other age group except newborns, "unintentional injury" is the leading cause of death, with car accidents making up the largest fraction of that (from above link).
https://www.cdc.gov/injury/images/lc-charts/leading_causes_of_death_age_group_2015_1050w740h.gif

However, narrowly there is a potential salient point here:
I'm more afraid of people losing their ability to make good decisions in general than of the death toll caused by car accidents due to bad decisions.
This is indeed a potential downside, and it does happen due to too much reliance on automation. Many plane crashes (example: Air France 447) happen because over-reliance on automation causes pilots to lose their skills or mis-perceive what the computers are telling them. At the same time, one can imagine the increase in self-driving cars correlating with an increase in alcohol abuse and alcoholism, since removing the need to drive home removes one incentive to behave responsibly. These unintended consequences may be hard to identify, but that's largely because they are much less common/significant than the primary consequence (the increased safety). So whereas automation failures cause a much more significant fraction of plane crashes today than they used to, overall there are far fewer plane crashes and fewer resulting deaths. The same positive trade-off will almost certainly be true of self-driving cars.
I'm pretty sure that most people who take the road on a given day arrive at their destination without any bad event whatsoever. Something in the order of 99.99...% or more. To me that most likely means that - statistically - people make good decisions, regardless of one's opinion on «what could've happened».
This is an improper way to look at the statistics: you are ignoring how often you play the game. Your chances of winning the lottery might be 1 in a million, but if you buy half a million different lottery tickets, your chances of winning are 50%. In other words, your individual odds of dying on any particular car ride are very small, but you take a lot of car rides, so your annual or lifetime risk is fairly significant.
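A minimal sketch of that compounding, with an assumed (purely illustrative) per-trip risk:

```python
# Minimal sketch of the "how often you play" point. The per-trip risk is
# assumed for illustration; real figures vary by country, road, and driver.
p_trip = 1e-7            # assumed chance of a fatal accident on one trip
trips_per_year = 2 * 250 # assumed commute: two trips a day, ~250 days/year

for years in (1, 10, 40):
    n = years * trips_per_year
    cumulative = 1 - (1 - p_trip) ** n  # chance of at least one fatal event
    print(f"{years:>2} years ({n:>5} trips): cumulative risk ~ {cumulative:.1e}")
# Tiny per-trip odds compound into a non-negligible lifetime figure.
```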
It might sounds cliché, but the only way you know there are good decisions, it's because there are bad decisions.
Well that's just silly. You don't need to be a genius to know that running a red light is dumb/dangerous and you don't need to test it either. I don't need to actually [chooses random object in field of view] pull a curtain rod off my wall and stab myself with it to know that would be a dumb thing to do. Humans are plenty smart enough to weigh decisions they have never taken.
 
  • Like
Likes Stavros Kiri
  • #68
Orodruin said:
That is an illusion based on most potentially fatal mistakes actually not being fatal. That you arrive safely is correlated with, but not equivalent to, driving safely and making good decisions. For example, failing to pay proper attention when driving across a seldom-used zebra crossing: this is a mistake that is going to go completely unnoticed until it doesn't. That does not make the mistake any less of a mistake.
I live in a city where there is a zoo. About 50 years ago, a lion escaped and spread terror to the point where it was shot to death. There is still a zoo today, and they still have lions. Is it a mistake on my part not to check for lions on my porch before leaving my house?

Making decisions is all about probability, and, yes, not paying attention when driving across a seldom-used zebra crossing is NOT a mistake, from my point of view. The proof lies in the results. Sure, the chances are always there, and a hit is inevitable given enough time. That is why I say about that accident «That person was unlucky, it usually doesn't happen», and not to the thousands of other people who did not have an accident «You were lucky, you could've hit a zebra!» I can assure you that this is how an AI would make its decisions as well.
Orodruin said:
When it comes to governing, I believe the last few years have clearly shown that a main flaw in democracy is that people are easy to influence with feelings and emotions based on false or invented facts. If it were sufficiently advanced and benevolent, I would be prepared to hand government over to an AI.
That is scary. The solution to that problem is to raise people who can make better decisions, not to replace them with machines. Is your solution to poorly educated people to replace them with machines that have better knowledge? Humans are NOT a lost cause. Otherwise there is no point in keeping humans alive.
Orodruin said:
You are talking about people handing over their decisions to a machine, removing their own decisions, but the truth is that many people already hand many of their decisions over to others. The only difference is that these others happen to be other humans (most of the time).
That is my point: we are on a dangerous path, one where the common man is seen as an unfit animal, unable to care for himself. I don't believe that. I have always felt that we should move towards having more people able to make decisions in all aspects of their lives, and thus contribute to society in general, not just wait for someone (or something) else to decide. That is what democracy is.
Orodruin said:
The way that you weigh decisions is by predicting and weighing outcomes against each other. You do not need to try it to know that hitting on 20 when the dealer shows a 6 is a bad decision.
Making decisions is often way more complicated than that. Ask the people of Florida whether they should evacuate or not when a hurricane is announced. Not an easy decision to make. How many times are you going to evacuate the entire state «for nothing» before you stop doing it? And when you don't do it, that may be the time you should have. Welcome to life. Can AI do better? I don't think so. The way nature does it is through diversity: some go, some stay, and at least one group survives. The «good» decision is unpredictable.
Orodruin said:
I would argue that saving human lives is a good reason.
No lives are ever saved. The best you can do is extend one. In the greater scheme of things, I still fail to see what improvement it brings to a form of life, the human race or any other. I guess it is those «feelings and emotions» that you were talking about that influence you. I wonder if you would appreciate a machine making decisions for you with that cold and objective attitude. After all, I'm a human being and I already have those thoughts. You'd better hope I won't be the programmer behind the next generation of AI.
russ_watters said:
Mostly, what we want machines to do for us are the things that are too hard, boring, or dangerous. It doesn't get in the way of our living; it frees us to do the living we really want to do.
Oh! I love this one! What is it we really want to do? What if what I really want to do is drive? Will I be allowed to, or will I be forbidden because it is considered too dangerous by many, too afraid I will kill them? Will I be considered abnormal for thinking driving is fun and not boring? What am I supposed to really want to do?

Have you noticed that the more we find things boring, the more we find the things we replace them with boring as well?
 
  • #69
russ_watters said:
But heck, I bet the first few times the engineers ran simulations of the accident after the fact, the computer reported to them that no accident happened.

Star Trek - Season 4 Ep 5 "Remember Me"

Tesla, what was that noise?
Explosive decomposition of passenger compartment and passenger.
Cause?
Design Flaw: there appears to be no structure above the mirrors.
 
  • Like
Likes Stavros Kiri and russ_watters
  • #70
jack action said:
Oh! I love this one! What is it we really want to do? What if what I really want to do is drive?
Is that true, or are you just asking for the sake of argument? We're not discussing an abstract hypothetical here; we're discussing a likely near-term reality about a daily task that most people would rather not do. Really. As @Greg Bernhardt said earlier, I can't wait for the time when I don't have to do the mind-numbing task of driving to and from work in traffic for an hour+ a day. I'd much rather be taking a nap, watching TV, reading a book, using PF, etc.
Will I be allowed to, or will I be forbidden because it is considered too dangerous by many, too afraid I will kill them?
1. There is a wide gulf between "possible" and "mandatory". For a very long time - perhaps until after we're both dead and perhaps forever - self-driving will be a feature we can choose to buy and use or not. Like a washing machine. I would bet money that most who can afford it will choose to buy it (and get rewarded with lower insurance rates). So I don't think the scenario you present (of self-driving being mandatory) is a reflection of any potential reality that is on the table.
2. I like driving too. Sometimes. In certain contexts. So perhaps instead of commuting to work (boring, stressful), I'll go to a test track on weekends and scratch the itch that way.
Will I be considered abnormal for thinking driving is fun and not boring? What am I supposed to really want to do?
That's meaningless and irrelevant here. Mature, free-thinking adults don't have to ask whether what they want to do is considered "abnormal". The question is simply what you and most people want.
Have you noticed that the more we find things boring, the more we find the things we replace them with boring as well?
No, I haven't. That's just silly. Why would I choose to do something more boring if there was something less boring and more enjoyable I could do? I don't go thinking to myself: "Hmm... should I watch a movie right now, or not turn on the TV and stare at the blank screen?"
 
  • Like
Likes Orodruin
