# Automation Ethics: Should your car serve you or serve society?

• Featured
Staff Emeritus
Tesla’s Autopilot a ‘distant second’ to GM’s Super Cruise system in Consumer Reports testing

I think this is very interesting because of Consumer Reports' reasoning behind its evaluation.

• The GM autopilot aggressively monitors the driver to make sure he or she is alert and paying attention.
• The GM autopilot can be used only on pre-planned roads, not on residential streets.
So their conclusion was that GM's version is clearly safer. But Tesla owners don't want to be monitored or restricted in what they do. They say it is absurd to give better ratings to an autopilot because it does less. I'm tempted to generalize and say that if you are the car owner, you prefer the Tesla, but if you are not the owner, you prefer the GM.

It is a theme we've heard before. When society's interests conflict with the individual owner's interests, which take priority? We'll hear this question again and again, in many ways, in the future. There is no answer that we can all agree on all of the time. Not ever.

Another way this question comes forward again involves Tesla. Tesla is selling its own brand of auto insurance. But Tesla has information that other insurance companies don't. It knows how fast you accelerate, how close you come to other cars and to pedestrians, and how fast you round each curve in the road. They know how often the car reminded you to stay alert. That enables Tesla to compute the insurance premium for each driver. They can give deep discounts to safe drivers and sky-high prices to dangerous ones. For young people especially, car insurance can cost more than the price of the car plus the cost of operation. If they display the cost in the car in real time, they might even persuade dangerous drivers to become safer. Of course, creepiness and invasion of privacy are the other side of the coin.
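The usage-based pricing described above could be sketched roughly as follows. Note that every factor name, weight, and clamp value here is an illustrative assumption for discussion, not Tesla's actual formula:

```python
# Hypothetical sketch of usage-based insurance pricing: telemetry-derived
# risk factors scale a base premium. All weights and limits below are
# invented for illustration; no real insurer's formula is implied.

def monthly_premium(base_rate, hard_brakes_per_100km,
                    close_follows_per_100km, alert_warnings_per_100km):
    """Scale a base premium by simple telemetry-derived risk factors."""
    risk_score = (0.5 * hard_brakes_per_100km
                  + 0.3 * close_follows_per_100km
                  + 0.2 * alert_warnings_per_100km)
    # Clamp the multiplier so safe drivers get a deep discount and
    # risky drivers pay a steep surcharge, as the post suggests.
    multiplier = min(3.0, max(0.5, 1.0 + 0.1 * risk_score))
    return round(base_rate * multiplier, 2)

safe = monthly_premium(100.0, 0.2, 0.5, 0.1)    # gentle driver
risky = monthly_premium(100.0, 8.0, 6.0, 4.0)   # aggressive driver
```

Displayed in the car in real time, a number like this would make the cost of each risky maneuver immediately visible to the driver.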

Perhaps we can have a SELFISH/ALTRUISTIC toggle switch on all our automated devices. If you choose SELFISH, it will cost you an additional $100/hour, but you are allowed to choose. For rich people, the fee might be progressive and expressed as a percentage of net worth. That might be the way to manage the question of automation ethics if we can't ever agree.

russ_watters

## Answers and Replies

f95toli
Science Advisor
Gold Member
It is an interesting question. In the Amazon Prime sci-fi show "Upload," all cars have a button that allows you to switch between "prioritise driver" and "prioritise pedestrians". That is, the choice is up to the owner of the car, as long as you follow all rules and regulations. Upload is a not-so-serious sci-fi show, not a serious contribution to the debate. That said, I think it is quite an interesting idea. It is a possible approach to a problem where no solution can be found using only facts/logic.

you would prefer the Tesla
Yes, I want that Tesla... but only if I can have a SELFISH/ALTRUISTIC/LIKE-A-BAT-OUT-OF-HELL toggle switch. Lol. PS: There is no such concept as "ethics" when riding in an autopiloted Tesla, so I'd also want a bull bar mounted on the front, which should suffice for "prioritise pedestrians".

Last edited:
anorlunda

jack action
Science Advisor
Gold Member
When society's interests conflict with the individual owner's interests, which take priority? We'll hear this question again and again, in many ways, in the future. There is no answer that we can all agree on all of the time. Not ever.
Why would anyone want to be part of a society that does not consider his or her interests? That would be totally absurd. If you believe in liberty and democracy, respecting an individual's own interests should never be up for discussion. When someone says "society's interest," in the end it always serves someone's interest at the expense of others.
But Tesla has information that other insurance companies don't. It knows how fast you accelerate, how close you come to other cars and to pedestrians, and how fast you round each curve in the road. They know how often the car reminded you to stay alert. That enables Tesla to compute the insurance premium for each driver. They can give deep discounts to safe drivers and sky-high prices to dangerous ones. For young people especially, car insurance can cost more than the price of the car plus the cost of operation. If they display the cost in the car in real time, they might even persuade dangerous drivers to become safer. Of course, creepiness and invasion of privacy are the other side of the coin.
I personally have no problem with this kind of behavior, as long as I still have the freedom NOT to choose Tesla or any other connected vehicle. The problem I have is that I cannot build the type of vehicle I want for myself. Every decision I could make is wrapped up in laws that have already decided what is best for me (or "society"?). In such conditions, my only options are to not have a car (i.e., not participating in society, being an outcast) or to ignore the laws (i.e., living in a parallel society with its own laws). Neither case is good for the group. The only group I want to be part of is one where you can only convince others of your interpretations of the world we live in, not force them to adopt your ways.

russ_watters

Mark44
Mentor
I'm in a similar quandary about automation -- not with cars, though, but with a new thermostat. The furnace stopped working at our house a couple of weeks ago, and it was determined that the main circuit board was out. Rather than replace the board (about $900) for a furnace that was 22 years old, my wife and I opted to spend a bit more to get a new furnace.
The thermostat that came with the new furnace is easier to read and program than the one it replaces, but it's really too clever by half. I programmed it to turn on the furnace at 5am, when my wife gets up. She reported to me that the furnace came on at 3:45. I adjusted the start time to 5:30am, but the furnace still came on well before that time.
After calling the company that installed the furnace, I found out that it's a "smart" thermostat, one that calculates when to turn on the furnace so as to get the house to the desired temperature at the time you set.
All I wanted was for the furnace to turn on at the time I set, not for the Tstat to try to guess when to turn on the furnace. I've been able to outsmart this "smart" device by setting the start time late enough so that the furnace comes on about when I want it to. I'd much rather have a dumb device that does what I tell it to do, not what it "thinks" I want.
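The "smart recovery" behavior described above can be sketched in a few lines. This is only an illustration of the general idea; the fixed heating rate is an assumed constant, whereas a real thermostat typically learns the house's actual warm-up rate over time:

```python
# Illustrative sketch of a "smart recovery" thermostat: it starts the
# furnace early so the house reaches the target temperature AT the
# programmed time, rather than starting at that time. The heating-rate
# constant below is an assumption for the example.

def recovery_start_minutes(current_temp, target_temp,
                           degrees_per_hour=2.0):
    """Minutes before the programmed time the furnace should start."""
    deficit = max(0.0, target_temp - current_temp)
    return deficit / degrees_per_hour * 60.0

# House at 62 F overnight, target 68 F at 5:00 am, warming ~2 F/hour:
# the furnace must start 180 minutes early -- which is why a 5:00 am
# program can light the furnace well before 5:00 am.
lead = recovery_start_minutes(62.0, 68.0)
```

Disabling the feature, as Mark44 eventually did via an installer setup option, is equivalent to forcing the lead time to zero.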

Last edited:
ChemAir, jbriggs444, russ_watters and 5 others
Tom.G
All I wanted was for the furnace to turn on at the time I set, not for the Tstat to try to guess when to turn on the furnace.
Reducing the options to the obvious:
• Tell the installer to put in the thermostat functionality you want, or you will either
  • Stop payment, or
  • Blanket the social networks with complaints
• Cave in and live with it

PeroK
Homework Helper
Gold Member
2020 Award
After calling the company that installed the furnace, I found out that it's a "smart" thermostat, one that calculates when to turn on the furnace so as to get the house to the desired temperature at the time you set.
All I wanted was for the furnace to turn on at the time I set, not for the Tstat to try to guess when to turn on the furnace. I've been able to outsmart this "smart" device by setting the start time late enough so that the furnace comes on about when I want it to. I'd much rather have a dumb device that does what I tell it to do, not what it "thinks" I want.

The most important aspect of any automated system is the manual override!

russ_watters, mfb, sysprog and 1 other person
Staff Emeritus
I'd much rather have a dumb device that does what I tell it to do, not what it "thinks" I want.
I am still startled when I flip a switch or push a button and it takes a perceptible time to have any effect. I grew up with lights that lit the instant you flipped the switch. The idea that my button push merely notifies a smart device of a request, and that the device will think about it before acting, is alien.

Averagesupernova, sysprog and symbolipoint
Mark44
Mentor
The most important aspect of any automated system is the manual override!
I think I found one. The user manual for the thermostat lists 63 ISU (installer setup) options, one of which seems to be the override for the 'stat's cleverness.
I am still startled when I flip a switch or push a button and it takes a perceptible time to have any effect. I grew up with lights that lit the instant you flipped the switch. The idea that my button push merely notifies a smart device of a request, and that the device will think about it before acting, is alien.
If you're talking about LED lights vs. incandescent lights, there is a noticeable delay before some LED lights illuminate. I don't know what the mechanism for LED lights is, but I doubt that they are doing any thinking.

Dr. Courtney
Gold Member
2020 Award
The free market should sort this out. Of course the free market includes the insurance companies who are free to set their rates based on their assessment of risks.

symbolipoint
Staff Emeritus
The free market should sort this out

Not sure history is on your side here. Airbags were not popular until they were mandated.

russ_watters
Dr. Courtney
Gold Member
2020 Award
Not sure history is on your side here. Airbags were not popular until they were mandated.

The litigation environment is much different now than when air bags were optional. Insurance companies are also much more proactive in assessing the risks of each specific customer rather than using the broad categories of back then. The air bag issue was also less relevant to liability litigation over others endangered by faulty systems. Not every accident gave rise to a lawsuit.

With the autopilot systems, nearly every accident that injures third parties will give rise to a lawsuit. The ambulance-chasing lawyers are lined up and ready to go.

DaveE
Gold Member
Why would anyone want to be part of a society that does not consider his or her interests? That would be totally absurd.
And why would anyone want to be part of a society that does not consider the interests of others and the common good? That would be totally absurd.

This is the fundamental dilemma which @anorlunda referred to. Society must balance the interests of individuals vs. other individuals and the collective group. There is no simple answer. This is the fundamental raison d'être of politics.

I'll add, as an aside: the issue of altruism in evolution is a fascinating subject related to this. Even our DNA has to struggle with the individual versus the collective good.

russ_watters and anorlunda
.Scott
Homework Helper
She reported to me that the furnace came on at 3:45. I adjusted the start time to 5:30am, but the furnace still came on well before that time.
After calling the company that installed the furnace, I found out that it's a "smart" thermostat, one that calculates when to turn on the furnace so as to get the house to the desired temperature at the time you set.
Most programmable thermostats of recent model years allow you to select whether you want the "early" feature. The people who installed your new thermostat would have left you the instructions. Failing that, you can get the manual online.
For example, I use a Lux TX1500Uc. The "early recovery" feature is shown in the TX1500Uc Thermostat Manual on page 20, item number 5.

Edit: I just read your post from 6 hours ago. So you found the feature. Good that you found it -- not so good that the people who installed it didn't give you better support.

.Scott
Homework Helper
I've been part of the development of automotive radar - although more the basic radar features end than with the code that makes such "ethical decisions". I have also coded ECDIS-N systems - marine navigation systems that can drive the autopilot.

With the ECDIS and ECDIS-N systems, the functionality is regulated and specified very precisely. The objective is to reliably and expertly maneuver the ship along a prepared and properly reviewed course. I don't recall any "ethical" decisions made by these systems. The course is checked against charts and bathymetric data to make sure that the ship will not run aground. But no attempt is made to determine whether the pilot is watching what is happening or whether the course conflicts with other shipping. So avoiding accidents such as the USS Fitzgerald / ACX Crystal collision is entirely up to the crew.

Of course, with a Tesla, the liability is with the driver. He is required to stay awake and avoid driving the car into other people and property - and to follow the rules of the road. The potential for the driver to violate this trust did not start with the introduction of driver-assistance features. It started as soon as someone took a motor vehicle onto public roads.

Still, I am not happy with what Tesla has done. It started with calling their system "autopilot" - a clear suggestion that drivers can use this system as aircraft pilots are shown using their autopilots in movies and TV.

At some point, these systems will start to move from "driver assistance" to "automated driving". That will happen when these systems can accumulate a better driving record than a large majority of human drivers.

Last edited by a moderator:
DaveE
Gold Member
Of course, with a Tesla, the liability is with the driver. He is required to stay awake and avoid driving the car into other people and property - and to follow the rules of the road.
Yes, that's how Tesla wants it, and, as you said, that is in fact all these systems can really do so far.

However, from a systems-design perspective, I'm not OK with expecting humans to reliably do something they aren't very capable of -- in particular, expecting them to pay attention while almost always doing nothing. This doesn't seem too different from expecting a steel beam in a building to almost always be strong enough.

- An airline would be held liable if it allowed an unqualified pilot to fly.
- The 737 MAX crashes could have been avoided if only the pilots had followed the proper "runaway trim" procedure, yet in practice Boeing has significant liability.
- I could lose a lawsuit if I had created an "attractive nuisance" -- like not putting a fence around a swimming pool and expecting parents to always watch their kids, or perhaps like designing a dangerous machine that is only safe with a perfect operator.

Society needs to figure out how to share this liability among all parties. That's why we have governments, laws, regulators, etc.

I think Tesla has been getting off easy in the PR aspect of automation. Several other auto manufacturers are being more careful vis-à-vis the human element.

berkeman
Mark44
Mentor
Good that you found it -- not so good that the people who installed it didn't give you better support.
The unit is a Honeywell thermostat. The installers left the user manuals, in English, Spanish, and French. I was able to go through the 63 installer setup options to find the one that did what I want.

Staff Emeritus
The installers left the user manuals, in English, Spanish, and French.

Are they the same? I had an appliance with manuals in French and German. In a particular situation, the German manual explained how to fix it. The French manual said to call the repairman.

So avoiding accidents such as the USS Fitzgerald / ACX Crystal collision is entirely up to the crew.

That did not end well.

I wonder how much of the problem is the sense of complacency that the automation will surely alert us if something goes wrong.

sysprog
Mark44
Mentor
Are they the same?
No idea. The English version had the info I needed, so I didn't go to the bother of translating the Spanish version, or try to translate the French version. My Spanish is much better than my French.

Staff Emeritus
although more the basic radar features end than with the code that makes such "ethical decisions".
In this case, the OP says that it was not software making the ethical decision, but rather the managements of GM and Tesla who decided what to allow their systems to do and how to prioritize safety versus useful functionality.

.Scott
Homework Helper
However, from a systems design perspective I'm not OK with expecting humans to reliably do something they aren't very capable of. In particular expecting them to pay attention while almost always doing nothing. This doesn't seem to different than expecting a steel beam in a building to almost always be strong enough.
I agree. And this impacts aircraft autopilot systems as well. It is very troubling that airlines expect their pilots to use their flight directors all the time - because they conserve fuel. The result is that the pilot only gets hands-on flying in a simulator or when something happens that kicks out the autopilot. I was completely unsurprised when an accident such as Air France 447 happened because the crew did not recognize a stall condition until it was too late.
I think Tesla has been getting off easy in the PR aspect of automation. Several other auto manufacturers are being more careful vis-à-vis the human element.
I agree - but the standards are set low. Misuse of the "autopilot" is not enough to make the Tesla an overall unsafe vehicle. In fact, it is recognized as very safe, although only in very minor part due to the driver assistance.

Last edited:
russ_watters and DaveE
.Scott
Homework Helper
In this case, the OP says that it was not software making the ethical decision, but rather the managements of GM and Tesla who decided what to allow their systems to do and how to prioritize safety versus useful functionality.
This is entirely true. Radar units are provided to auto manufacturers with inherent abilities to detect and track the lane, other vehicles, and other objects - and scores of other things. But top level code from the automaker is compiled with the radar code. The final firmware image is produced and tested by the automaker - although this may involve substantial tech support from the radar manufacturer.

jack action
Gold Member
And why anyone would want to be part of a society that does not consider the interests of others and the common good? That would be totally absurd.
That is exactly what I said: when you join a group, it should consider your interests. That holds for anyone joining the group.

My statement, in other words: why would anyone want to be in a society that doesn't consider the interests of some of its members?
This is the fundamental dilemma which @anorlunda referred to. Society must balance the interests of individuals vs. other individuals and the collective group. There is no simple answer. This is the fundamental raison d'être of politics.
There is no dilemma. There is no «who in the group will take one for the team?» There are no martyrs. In a group, whatever you put in common, everyone shall gain something from it.

If we cannot find common ground, we don't put this specific subject in common. Period. It's not because we agree on, say, a justice system that we have to agree on an educational system. Not imposing your views on people that don't agree with you, that should be the fundamental raison d'être of politics.

@anorlunda 's wording was «When society's interest conflict [...]». Who is deciding for the «society»? And what is that «interest»? Usually - not saying it is the case here - «society's interest» is most likely a neat way of disguising the terms «my interest», but ennobling it a little bit. «Don't do it for me, do it for society.»

This is like the question about self-driving cars: given a choice, should it protect the occupants or the people surrounding the car? Easy: the occupants. Always. How can I be sure? With one simple example. You buy a self-driving car and you put your kids in it. There is no way a parent will say: «Yes, if need be, this car that I bought can sacrifice my kids for the lives of others.» That would be totally insane. There is a self-driving car out there that could kill my kids playing outside because it chose to save its occupants instead? Welcome to life: danger is everywhere and you have to learn to live with it. Sometimes you lose. If I bought that car and my kids were in it, I wouldn't want that car to hesitate about saving them. I can accept that there is danger out there that I don't control, but making a conscious decision (for example, buying a tool or a service) that I know may go willingly against my interest? That is absurd.

Praise of martyrs is a selling point for people who want to gain something at the expense of somebody else. Always.

Staff Emeritus
Let's stick with the scenario in the OP rather than a hypothetical kill A rather than kill B.

The OP said that GM's autopilot will work only on pre-planned roads, which do not include residential streets. That restricts the functionality and convenience in order to limit use to the safest cases. The parents with kids living on the residential streets can be considered "society" in this scenario, and the owner of the car who wants full-time autopilot, the individual.

russ_watters and DaveE
symbolipoint
Homework Helper