Do you feel safer with self-driving cars on the road?


Discussion Overview

The discussion centers on feelings of safety regarding self-driving cars on the road. Participants explore several aspects of the topic, including technological limitations, human driving behavior, and the implications of self-driving technology for overall road safety.

Discussion Character

  • Debate/contested
  • Exploratory
  • Technical explanation
  • Conceptual clarification

Main Points Raised

  • Some participants express skepticism about self-driving cars, citing their limitations in anticipating complex driving situations that human drivers might handle better.
  • Others argue that self-driving cars could potentially reduce accidents caused by human error, such as driving under the influence or distracted driving.
  • A few participants note that the current proportion of self-driving cars on the road is too small to significantly impact overall safety.
  • Concerns are raised about the reliability of sensors in adverse conditions and the potential for mechanical failures requiring manual intervention.
  • Some participants highlight the importance of human intuition and experience in navigating complex driving scenarios, suggesting that self-driving technology may not yet be able to replicate this effectively.
  • There is a discussion about the psychological aspect of feeling safer versus actual safety, with some participants emphasizing that feelings of safety can be influenced by individual perceptions and logical reasoning.
  • A participant mentions the potential for machine learning to improve self-driving technology, suggesting that computers might eventually outperform humans in driving tasks.
  • Some participants reflect on the evolution of vehicle safety features over time, comparing them to the anticipated advancements in self-driving technology.

Areas of Agreement / Disagreement

Participants generally express a mix of opinions, with no clear consensus on whether self-driving cars will make people feel safer. Some agree on the potential benefits of self-driving technology, while others remain skeptical about its current capabilities and the implications for road safety.

Contextual Notes

Participants acknowledge various limitations, including the dependency on technology reliability, the unpredictability of human behavior, and the challenges of programming self-driving systems to handle complex driving environments.

Do you feel safer with self-driving cars on the road?

  • Yes
    Votes: 31 (41.3%)
  • No
    Votes: 37 (49.3%)
  • No opinion
    Votes: 7 (9.3%)
  • Total voters: 75
  • #241
FactChecker said:
I should have known that Canada would take the issue of self driving cars on snow and ice very seriously:

Not much there to consider about whether self-driving cars can function in winter.
I think they have a good deal more testing to do before the confidence level becomes impressive.
A 3D map? That would need complete updates minute to minute, hour to hour, day to day, month to month - the landscape can change.
The landscape maps most likely do not include the mountains of pushed and dumped snow 15 feet high, either.
Or the 2-to-4-foot-high plowed bank at the side of the road.
Does, or will, a self-driving car know how to rock itself out of a parking spot after sitting there overnight? The ones without a steering wheel and pedals may have a problem.
Will it clean the snow off the hood, headlights, and tail lights, or will that be the passenger's responsibility?
Windshield washer fluid check.
Wiper blade freeze-up - it may not need those two, but passengers do like to gawk at all the other drivers.
I am sure (?) that the shoveling and pushing will be a thing of the past after a foot of snowfall with a self-driving car.
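The map-staleness worry above can be sketched in code. This is a toy change detector, comparing a stored occupancy map against a fresh scan to flag places (like a new snowbank) where the map no longer matches reality. Every name and the plain-set grid representation are invented for illustration; real systems use probabilistic occupancy grids and sensor registration, not sets.

```python
# Toy illustration of the map-staleness problem: a stored 3D map can
# disagree with what sensors see after a snowfall. Hypothetical names;
# real stacks use probabilistic occupancy grids, not plain sets.

def map_changes(prior_map, live_scan):
    """Return cells the live scan sees occupied that the prior map says
    are free (e.g. a freshly plowed snowbank), and vice versa."""
    appeared = live_scan - prior_map   # new obstacles: the map is stale
    vanished = prior_map - live_scan   # mapped obstacles now gone
    return appeared, vanished

# Prior map of occupied (x, y) grid cells, built before winter.
prior = {(0, 1), (0, 2), (5, 5)}
# Winter scan: a plowed bank now occupies (1, 1) and (1, 2).
scan = {(0, 1), (0, 2), (1, 1), (1, 2)}

appeared, vanished = map_changes(prior, scan)
print(appeared)  # {(1, 1), (1, 2)} -> the map needs an update here
print(vanished)  # {(5, 5)}
```

The point of the sketch is only that detecting the change is cheap; keeping the map updated "minute to minute" everywhere is the expensive part.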
 
  • #242
Will your self-driving car decide to kill you if its algorithms are forced to choose between, say, driving off a cliff or into a crowd of people?
 
  • #243
BWV said:
Will your self-driving car decide to kill you if its algorithms are forced to choose between, say, driving off a cliff or into a crowd of people?
Depends on the program
 
  • #244
Stavros Kiri said:
Depends on the program
The program called "Thelma & Louise" is especially bad that way.
 
  • #245
BWV said:
Will your self-driving car decide to kill you if its algorithms are forced to chose between, say, driving off a cliff or into a crowd of people?
The best way to answer this question is to answer this one:

While driving a car, would you decide to kill yourself if you were forced to choose between, say, driving off a cliff or into a crowd of people?
 
  • #246
jack action said:
The best way to answer this question is to answer this one:

While driving a car, would you decide to kill yourself if you were forced to choose between, say, driving off a cliff or into a crowd of people?

The law does not expect you to sacrifice yourself to save others, but it might require your car to

http://science.sciencemag.org/content/352/6293/1573
 
  • #247
FactChecker said:
The program called "Thelma & Louise" is especially bad that way.
Here is another question, one that might challenge the option of manual override, which I spoke in favour of earlier:
Will a human be allowed to manually override an autonomous vehicle to "their death"?

[As a first response, I can't see why not. (Suicide generally may be immoral but not illegal ...)
But what about ... to other people's death?]
 
  • #248
BWV said:
The law does not expect you to sacrifice yourself to save others, but it might require your car to

http://science.sciencemag.org/content/352/6293/1573
I don't think it will ever happen. The machine is there to make the decision for the driver (now only a passenger), so if a human driver's reaction is deemed acceptable, the same decision made by an AI 'working' for the passenger should be acceptable too.

It would be terrible if human lives were reduced to mere probabilities and statistics, because then humans would just become livestock. And that is why (from the abstract of your link) no one wants to be in a driverless vehicle that does not have its passengers as its number one priority.

Imagine putting your child on a school bus. Could you accept the bus driver sacrificing your child because he might save other (more valuable) people? Now replace the school bus and its driver with a driverless bus. The moral dilemma fades quickly.
 
  • #250
jack action said:
I don't think it will ever happen. The machine is there to make the decision for the driver (now only a passenger), so if a human driver's reaction is deemed acceptable, the same decision made by an AI 'working' for the passenger should be acceptable too.

It would be terrible if human lives were reduced to mere probabilities and statistics, because then humans would just become livestock. And that is why (from the abstract of your link) no one wants to be in a driverless vehicle that does not have its passengers as its number one priority.

Imagine putting your child on a school bus. Could you accept the bus driver sacrificing your child because he might save other (more valuable) people? Now replace the school bus and its driver with a driverless bus. The moral dilemma fades quickly.
The decision would be the driver's choice, and a matter of how a decision made in seconds plays out.

No driver can, and neither could a self-driving car, do the necessary calculations in the short time allocated.
Otherwise, there would be enough time to avoid and/or stop without harming anyone.

Then again, how often do these scenarios ever play out anyway?
But owing to the chance that one could, most drivers would probably try to avoid bus shelters, babies in carriages, bicycles, driving off a cliff, wedding parties, ramming into a building, or whatever. Accidents happen in such a split second that there is just not enough time to second-guess maneuvers. In the end, the casualties may be the occupants of the car, or bystanders, some of both, or neither.

At one time I used to think, ah, that's a moral dilemma to sort out - how to make the program make a moral decision, or weigh the cost of a life - but not so much anymore. If it could be done, i.e. give the program the responsibility of making moral decisions, the question then becomes "Whose morals?" It quickly becomes a quagmire.

The best way to sort it out, as is done now with human drivers, is through the legal system and payouts, if anyone is ever found negligent and/or responsible for causing the accident. "Black boxes", like flight recorders, could become the norm for cars equipped as self-driving, as a means of providing evidence.
 
  • #251
256bits said:
No driver can, and neither could a self-driving car, do the necessary calculations in the short time allocated.
That is why there is always a "fail-safe" maneuver, which - in the case of humans - is to do whatever will most likely protect oneself. It is hardwired, a reflex. If you use AI, you will have to program one of these maneuvers: in the event I can't decide what to do, what do I do?
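That "what do I do when I can't decide" question can be sketched as a tiny decision function: pick the best-scored maneuver only when the score clears a confidence bar, and otherwise execute a hardwired default. Every name, the threshold, and the brake-in-lane fallback are invented for illustration, not how any real driving stack is written.

```python
# Hypothetical sketch of a fail-safe decision policy: if no candidate
# maneuver is clearly safe, fall back to a hardwired default (here,
# maximum braking in lane). Names and threshold are invented.

def choose_maneuver(candidates, confidence_threshold=0.9):
    """candidates: list of (name, estimated_safety) pairs in [0, 1].
    Pick the safest maneuver only if we are confident in it;
    otherwise execute the fail-safe reflex."""
    if candidates:
        name, safety = max(candidates, key=lambda c: c[1])
        if safety >= confidence_threshold:
            return name
    return "brake_hard_in_lane"  # the reflex: the default when unsure

print(choose_maneuver([("swerve_left", 0.95), ("brake", 0.80)]))  # swerve_left
print(choose_maneuver([("swerve_left", 0.60), ("brake", 0.55)]))  # brake_hard_in_lane
print(choose_maneuver([]))                                        # brake_hard_in_lane
```

The design choice mirrors the human reflex described above: the fallback is chosen ahead of time, not computed in the moment.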

For my part, when I face a moral dilemma, I can always count on The Simpsons to show me the way. And here is how The Simpsons believe an AI should react in extreme cases:

Save yourself! :biggrin:
 
  • #252
Another issue with driving in snow is the problem of staying in the correct lane. That is being worked on.

But this approach requires that the lane signature be mapped ahead of time and be available to the car. That doesn't seem very realistic to me. I think it would be easier if something were embedded in the pavement to indicate the lane.
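The lane-signature idea above can be sketched as a one-dimensional template match: slide the car's live reading along the pre-mapped signature and keep the offset that scores best. The function and data are toys invented for illustration; real lane localization fuses many sensors and far richer signatures.

```python
# Toy sketch of matching a live measurement against a pre-mapped
# lane signature to recover position. All names and data are invented.

def best_offset(mapped, live):
    """Return the shift (in samples) at which `live` best matches a
    window of `mapped`, by maximizing the dot product."""
    best, best_score = 0, float("-inf")
    for shift in range(len(mapped) - len(live) + 1):
        window = mapped[shift:shift + len(live)]
        score = sum(a * b for a, b in zip(window, live))
        if score > best_score:
            best, best_score = shift, score
    return best

mapped_signature = [0, 1, 0, 3, 7, 3, 0, 1, 0]      # stored lane signature
live_reading = [3, 7, 3]                            # what the car measures now
print(best_offset(mapped_signature, live_reading))  # 3
```

The catch raised in the post applies directly: this only works if `mapped_signature` was surveyed ahead of time and shipped to the car, which is exactly the part that seems unrealistic at scale.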
 
