Automation Ethics: Should your car serve you or serve society?

Summary
The discussion centers on the ethical implications of automated driving systems, particularly comparing Tesla's Autopilot and GM's Super Cruise. Consumer Reports rated GM's system higher due to its rigorous driver monitoring and safety features, which some Tesla owners find restrictive. The conversation highlights the ongoing conflict between individual preferences and societal safety, questioning whether personal interests should outweigh collective well-being. Additionally, Tesla's data-driven insurance model raises concerns about privacy and the potential for incentivizing safer driving behaviors. Ultimately, the debate reflects broader themes of autonomy, regulation, and the balance between individual rights and societal responsibilities.
  • #31
anorlunda said:
Problems and bad publicity happen when drivers with a level 2 car act as if it were a higher level.
Therein lies the reason engineers are always trying to make their stuff "idiot proof". It's not just a cute phrase.
 
  • #32
anorlunda said:
I believe that many drivers dream of the day when they can have level 5 automation and do other things in the back seat while their car drives them to the destination. The latest releases from both Tesla and GM are at level 2.
A level 2 system that's reliable enough becomes level 3, and a level 3 system that can handle enough rare situations becomes level 4. Both are gradual changes in terms of actual improvement; legislation might introduce discrete steps.

Going from 4 to 5 could be very different for different companies. GM doesn't seem to plan on evolving Super Cruise into a level 5 system; instead, they seem to be trying to go to level 5 directly: Reuters news. Waymo would need to get rid of the mapping requirement for roads. For Tesla the change might be the smallest - the software never relied on accurate maps, and it can already handle parking lots and most (all?) road types.
 
  • Like
Likes anorlunda
  • #33
anorlunda said:
Pilots flying manually would not be able to maintain the speed, heading, and altitude restrictions required, nor be depended on to stay awake and alert on long flights.
In order to get your pilot's license, you need to do a 400-mile solo triangle. It involves hours of flying, and I have never heard of anyone doing it with autopilot - although it would be legal to do so.

Short of autopilot, there is trim. Once you reach your target altitude, heading, and speed, you trim the plane to reduce the pilot's control forces. At that point, you are flying with very light "fingertip" control inputs. But even in situations where rough weather prevented fingertip operation, flying for hours was not a problem. It isn't like driving a car: routine "driving" is easier, but the procedures (routine and exceptional) are more involved.

Altitude assignments need to be followed to within 50 feet. For a practiced pilot this is not a serious challenge.
Whether the plane is on autopilot or not, the pilot should be continuously monitoring what the plane and the environment are doing. Personally, I find staying alert easier without autopilot than with.
 
  • Like
Likes Klystron
  • #34
Mark44 said:
I'd much rather have a dumb device that does what I tell it to do, not what it "thinks" I want.
My Toyota had a "feature" where the horn would honk non-stop to warn me the battery was low. Of course, it would always happen at inconvenient times. What a royal pain in the neck. You'd think it was designed by lawyers.
 
  • Like
Likes symbolipoint
  • #35
bob012345 said:
My Toyota had a "feature" where the horn would honk non-stop to warn me the battery was low.
Which, of course, would drain the battery even more quickly.
 
  • Like
Likes symbolipoint
  • #36
Mark44 said:
Which, of course, would drain the battery even more quickly.
Yes, it must have been designed by lawyers and accountants.

But to answer the original post, if my car is going to serve society first, then society should pay for it.
 
  • Like
Likes jack action, Dale and Tom.G
  • #37
Just a few tweaks, and ...
What Level would you say you are at?

LEVELS OF AUTOMATION: WHO DOES WHAT, WHEN

Level 0: The human driver does all the driving.
Level 1: WIFE can sometimes assist the human driver with either steering or braking/accelerating, but not both simultaneously.
Level 2: WIFE can itself actually control both steering and braking/accelerating simultaneously under some circumstances. The human driver must continue to pay full attention ("monitor the driving environment") at all times and perform the rest of the driving task.
Level 3: WIFE can itself perform all aspects of the driving task under some circumstances. In those circumstances, the human driver must be ready to take back control at any time. In all other circumstances, the human driver performs the driving task.
Level 4: WIFE can itself perform all driving tasks and monitor the driving environment (essentially, do all the driving) in certain circumstances. The human need not pay attention in those circumstances.
Level 5: WIFE can do all the driving in all circumstances. The human occupants are just passengers and need never be involved in driving.

You could already have automated driving assistance and not know it, nor appreciate the technology.
 
  • Like
Likes symbolipoint
  • #38
Is the title question a silly question? Vehicles for "driving" came from using animals for transportation. Maybe the earlier use of animals for transport was as much for the benefit of a group as for the benefit of individual people. That has not changed in modern times.

Time For Some Humor:
One day we may have a self-driving unicycle.
 
  • #39
symbolipoint said:
Time For Some Humor:
One day we may have a self-driving unicycle.
We are halfway there...
 
  • #40
anorlunda said:
When society's interests conflict with the individual owner's interests, which takes priority?
I think that it is also important to distinguish between different levels of interest. For example, while a reasonable case can be made that the designers of the car should prioritize the owner’s life over a pedestrian’s life, the owner’s property would not be prioritized over a pedestrian’s life.

To me, the difference between the GM and Tesla systems seemed to be more about user convenience than anything else. As far as I know, there is too little safety data to say which causes more harm.
 
  • Like
Likes russ_watters and jack action
  • #41
jack action said:
Why would anyone want to be part of a society that does not consider his or her interests? That would be totally absurd.
Strawman alert. The GM model does not exclude your interest, it just limits your ability to inflict damage on the rest of society in satisfying it. You seem to favour a return to the situation in the early years of motoring where pedestrian deaths tended to be viewed as the fault of the pedestrian and an inconvenience for the speeding (and not infrequently drunk) motorist.
 
  • #42
Ophiolite said:
The GM model does not exclude your interest, it just limits your ability to inflict damage on the rest of society in satisfying it.
The point I'm strongly defending is making sure no one thinks it's OK to have decisions made for you based on someone else's fears. Does the Tesla model inflict more damage than the GM model? That is the real question, and it can be answered with data.

As an individual, you assess the situation and, based on your experience, you decide how much risk you are willing to take. This evaluation is very personal - and often very different - for every individual, and in my opinion it should stay that way. If we remove someone's right to take a risk, we remove all possible progress. It's often easy to identify extreme cases. But people who are too reckless or too careful see all cases as extreme, when in reality the true answer begins with «Well, it depends ...»

The situation at hand is dangerously close to that kind of thinking. Person A is afraid of self-driving cars, therefore person B shouldn't have one, even if person B is fine with it. Somehow person A's fear rules everyone's decision process. When person A hides behind «society» to do so, I think it's wrong.

Implying that person B doesn't care about inflicting damage on person A because he/she doesn't share the same fear is a logical fallacy. It is sometimes presented as an appeal to ignorance («We don't know if it is dangerous, therefore we shouldn't take any chances») or as a questionable cause («If they don't take this precaution, then they must want to hurt others»).

Recall OP's statement:
anorlunda said:
So their conclusion was that GM's version is clearly safer.
Where are the facts showing that Tesla's method gives worse results than GM's? As far as we know, neither one has more accidents - or even incidents - in tests or the real world. On what basis can someone use the term «clearly»?

And if there are no incidents, why would «society» care? (Again, who is «society»?)
Ophiolite said:
Strawman alert. [...] You seem to favour a return to the situation in the early years of motoring where pedestrian deaths tended to be viewed as the fault of the pedestrian and an inconvenience for the speeding (and not infrequently drunk) motorist.
Mischaracterizing the opponent’s position for the sake of deceiving others IS a strawman argument. Thank you for the example.
 
  • #43
jack action said:
As an individual, you assess the situation and, based on your experience, you decide how much risk you are willing to take.

At face value, that would argue against speed limits and other traffic regulations if they were enforced automatically and gave a driver no choice about whether to obey them. As it is, the driver has a choice about taking the risk of getting a ticket, causing an accident etc.

Whatever the merits of that abstract theory, I think autopilot technology for cars will eventually lead to automatic enforcement of traffic laws by computers. That will still leave people choices - whether to hack the computers in their cars, whether to give up driving etc.
 
  • #44
jack action said:
The point I'm strongly defending is making sure no one thinks it's OK to have decisions made for you based on someone else's fears.
To me, the key feature of the OP story was that these were private (GM & Tesla) decisions, made to be settled in the marketplace and free of government interference. Bringing in the argument of others forcing decisions upon you really is a strawman argument.
 
  • #45
This is a great example - we have two competing values and are asking machines to choose:
  • Obey the speed limit
  • Avoid accidents
What happens when the best way to avoid an accident is to speed up? And what if you don't know what the best way is? What if you think speeding up is just a pretty good way? People have trouble with these decisions - do we think machines will do any better?

Another (IMO, more likely) decision is whether to execute a maneuver if it a) decreases the probability of an accident, but b) increases the severity of an accident if one occurs. If you say "just look at expectation values", I would counter that you probably know neither the probability nor the impact well.
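
For what it's worth, here is a minimal sketch of that last point, with entirely invented numbers (not real crash statistics): an expected-harm comparison between two maneuvers, showing how a plausible shift in one poorly known estimate flips the ranking.

```python
# Minimal sketch: "just look at expectation values" is fragile when neither
# the probability nor the severity of a crash is well known.
# All numbers below are invented for illustration.

def expected_harm(p_crash: float, severity: float) -> float:
    """Expected harm of a maneuver: crash probability times harm if it occurs."""
    return p_crash * severity

# Maneuver A: stay the course.
# Maneuver B: evade - lowers crash probability, raises severity if it happens.
stay  = expected_harm(p_crash=0.010, severity=100.0)  # 1.0
evade = expected_harm(p_crash=0.004, severity=300.0)  # 1.2 -> staying looks better

# Nudge one uncertain estimate by a plausible amount and the ranking flips:
evade_alt = expected_harm(p_crash=0.002, severity=300.0)  # 0.6 -> evading looks better

print(stay, evade, evade_alt)
```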
 
Last edited:
  • Like
Likes symbolipoint
  • #46
Stephen Tashi said:
At face value, that would argue against speed limits and other traffic regulations if they were enforced automatically and gave a driver no choice about whether to obey them.
What is the value of a law that forces you to wait at a red light when there is no one around? If all cars had autopilot, would you trust them to proceed through an intersection because the car is aware that it is alone? Why don't we trust humans to do that? Why does the human need to be punished for making a decision that has no consequences?

Interesting anecdote: one of my cousins lived in England for a year and told me there were basically no stop signs there. Every intersection was basically treated as a roundabout, where the car on your right has priority over you.

For people like us living in Québec, Canada, it seems chaotic. How do they do it? Here, people panic when there are no stop signs. Some trials of 'right of way' rules in residential areas created confusion. Roundabouts have been installed lately, and a lot of people panic.

But actually, we were doing exactly the same thing as in England before the '80s, without realizing it. There were stop signs everywhere but, back then, everybody was doing what we called an 'American stop', i.e. we slowed down and, if there was nobody, we would proceed without halting. Priority was given to the first car arriving at the intersection. There were no laws regulating it; it was just common knowledge.

But in the '80s, some bright police officer actually read the book of law and found out that it said a vehicle must 'halt' at a stop sign. An opportunity for tickets arose. The notion soon spread all over the police force, and ticket traps were set up everywhere. Things like 'you must halt and count to 3 before going on' were what police officers would tell you (not in the law). Suddenly, everyone not making a full stop was a dangerous driver. The notion sank in and is now well accepted. The police have since stopped giving tickets for stop signs (I can't remember the last time I heard of someone receiving one). But what have we gained as a society? People who are so afraid of intersections without stop signs that they panic. Somehow, people do not trust that other people will stop for them. "There are so many crazy drivers out there!"

This is a case where laws make no sense. Laws can be abused for reasons other than what they were intended for. People are not stupid. They don't drive to create accidents. Humans - although not without flaws - are more capable of accomplishing complex tasks than we give them credit for. But it is easy to destroy their confidence. The same goes for car companies (which are also run by humans). They don't build cars to kill people.
Stephen Tashi said:
That will still leave people choices - whether to hack the computers in their cars, whether to give up driving etc.
These are not choices about driving. These are choices about breaking the law (being an outlaw) or not participating (being an outcast). It sounds more like extortion to me than freedom of choice.

Who makes those laws, anyway? Who decided that over X km/h is too fast? How do you impose the same limit on a 20-year-old pickup truck and on a brand new Porsche? On an old tired man and on an alert young woman?
Vanadium 50 said:
This is a great example - we have two competing values and are asking machines to choose:
  • Obey the speed limit
  • Avoid accidents
This is where it all starts, by opposing two unrelated values: if you go over the speed limit, will you cause an accident? Or, if you stay below the speed limit, do you avoid an accident?

There is no direct relationship between the two. There is no line drawn in the sand that can make such a clear distinction. You can have an accident regardless of the speed you are driving. There are a lot more variables in the equation. The only direct relationship between speed and accidents is the severity of the consequences WHEN you have an accident.
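
That last relationship is just kinematics: the kinetic energy a crash must dissipate grows with the square of the speed, so, for example, going 30% faster means 69% more energy in the impact:

$$E_k = \tfrac{1}{2}mv^2 \quad\Rightarrow\quad \frac{E_k(130~\mathrm{km/h})}{E_k(100~\mathrm{km/h})} = \left(\frac{130}{100}\right)^2 = 1.69$$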

This is again a causal fallacy: it is not a choice between respecting the speed limit and having an accident. Nobody chooses to have an accident, regardless of their speed.
anorlunda said:
To me, the key feature of the OP story was that these were private (GM & Tesla) decisions, made to be settled in the marketplace and free of government interference. Bringing in the argument of others forcing decisions upon you really is a strawman argument.
You seem to think governments make better decisions than the people they represent. The people driving cars or running private companies are the same ones who get elected. Such a hypothesis never made any sense to me, and there is no data supporting it (or there is an equal amount of data disproving it, if you prefer).

You think this:
anorlunda said:
we can have a SELFISH/ALTRUISTIC toggle switch on all our automated devices. If you choose SELFISH, it will cost you an additional $100/hour, but you are allowed to choose. For rich people, the fee might be progressive and expressed in percent of your net worth. That might be the way to manage the question of automation ethics if we can't ever agree.
is not about others forcing decisions?

I wonder in which category you place yourself? Nobody thinks he is in the SELFISH category. Nobody. The truth behind 'managing the question of automation ethics' is really about everyone else agreeing with you, and those who won't, we'll just make them pay until they regain their senses and act as you wish. Do you really think someone will 'choose' to pay $100/hour because they don't agree with you?

One's life is so much easier when the law agrees with his/her views or actions. Sadly, laws are always a very poor solution for people who disagree with each other, and they are certainly a poor way of 'living together' in society.
 
  • Sad
Likes Dale
  • #47
bob012345 said:
But to answer the original post, if my car is going to serve society first, then society should pay for it.
Yes. In a perfect world, you should pay society for the privilege of taking a minimally safe car out on the public highways. You should then be paid a rebate for a car that is better than required or for driving behavior or automation options that are extra safe. You should be penalized for unsafe equipment or behavior.

To some extent, laws and insurance companies make this a reality today.
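
As a toy illustration only (the base fee, rebate rate, and "safety score" below are entirely hypothetical, not any real insurance or regulatory formula), such a scheme could be as simple as:

```python
# Hypothetical sketch of the pay/rebate/penalty scheme described above.
# All parameters are invented for illustration.

def net_road_fee(safety_score: float, base_fee: float = 1000.0,
                 rebate_per_point: float = 400.0) -> float:
    """Net annual payment for access to public highways.

    safety_score = 0.0  -> minimally safe car/driver, pays the full base fee
    safety_score > 0.0  -> safer than required, earns a rebate
    safety_score < 0.0  -> unsafe equipment/behavior, pays a penalty
    """
    return base_fee - rebate_per_point * safety_score

print(net_road_fee(0.0))   # 1000.0 : minimally safe
print(net_road_fee(1.5))   #  400.0 : extra-safe automation options
print(net_road_fee(-1.0))  # 1400.0 : unsafe behavior
```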
 
Last edited:
  • Like
Likes bob012345 and russ_watters
