Automation Ethics: Should your car serve you or serve society?

  • #1
anorlunda
Tesla’s Autopilot a ‘distant second’ to GM’s Super Cruise system in Consumer Reports testing

I think this is very interesting because of Consumer Reports' reasons for its evaluation.

  • The GM autopilot aggressively monitors the driver to make sure he/she is alert and paying attention.
  • The GM autopilot can be used only on preplanned roads and not on residential streets.
So their conclusion was that GM's version is clearly safer. But Tesla owners don't want to be monitored or restricted in what they do. They say it is absurd to give better ratings to an autopilot because it does less. I'm tempted to generalize and say that if you are the car owner, you would prefer the Tesla, but if you are not the owner, you prefer the GM.

It is a theme we've heard before. When society's interests conflict with the individual owner's interests, which takes priority? We'll hear this question again and again in many ways in the future. There is no answer that we can all agree on all of the time. Not ever.

Another way this question comes forward again involves Tesla. Tesla is selling its own brand of auto insurance. But Tesla has information that other insurance companies don't. It knows how fast you accelerated, how close you came to other cars, how close to pedestrians, how fast you rounded each curve in the road. It knows how often the car reminded you to stay alert. That enables Tesla to compute an insurance premium for each individual driver. They can give deep discounts to safe drivers and sky-high prices to dangerous ones. For young people especially, car insurance can cost more than the price of the car plus the cost of operation. If they displayed the cost in the car in real time, they might even persuade dangerous drivers to become safer. Of course, creepiness and invasion of privacy are the other side of the coin.
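As a toy sketch of how such usage-based pricing could work - this is NOT Tesla's actual model; every factor name and weight below is invented for illustration:

```python
# Toy usage-based insurance premium. NOT Tesla's actual pricing model:
# all factor names and weights here are invented for illustration.

def monthly_premium(base_rate, hard_brakes_per_100mi,
                    avg_following_secs, alert_warnings_per_100mi):
    """Scale a base premium by simple telemetry-derived risk factors."""
    risk = 1.0
    risk += 0.05 * hard_brakes_per_100mi      # frequent hard braking
    risk += 0.10 * alert_warnings_per_100mi   # "stay alert" reminders
    if avg_following_secs < 2.0:              # tailgating (2-second rule)
        risk += 0.5 * (2.0 - avg_following_secs)
    return round(base_rate * risk, 2)

# Same base rate, very different premiums:
safe = monthly_premium(100.0, hard_brakes_per_100mi=0.5,
                       avg_following_secs=2.5, alert_warnings_per_100mi=0.0)
risky = monthly_premium(100.0, hard_brakes_per_100mi=6.0,
                        avg_following_secs=1.0, alert_warnings_per_100mi=3.0)
```

The point of displaying such a number in the car in real time is that the driver can see exactly which habits are costing money.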

Perhaps we can have a SELFISH/ALTRUISTIC toggle switch on all our automated devices. If you choose SELFISH, it will cost you an additional $100/hour, but you are allowed to choose. For rich people, the fee might be progressive and expressed in percent of your net worth. That might be the way to manage the question of automation ethics if we can't ever agree.
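A back-of-the-envelope version of that toggle fee, for illustration: the $100/hour flat rate comes from the post, while the net-worth threshold and percentage are purely hypothetical.

```python
def selfish_mode_fee(hours, net_worth, flat_rate=100.0,
                     wealth_threshold=10_000_000, pct_of_net_worth=0.001):
    """Fee for choosing SELFISH mode: a flat hourly rate, switching to a
    (hypothetical) percentage of net worth above a wealth threshold."""
    if net_worth > wealth_threshold:
        hourly = net_worth * pct_of_net_worth   # progressive fee for the rich
    else:
        hourly = flat_rate                      # $100/hour, per the post
    return hours * hourly

# Two hours of SELFISH driving:
#   modest net worth -> flat rate; large net worth -> progressive fee.
ordinary = selfish_mode_fee(2, net_worth=500_000)
wealthy = selfish_mode_fee(2, net_worth=20_000_000)
```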
 
  • Like
Likes russ_watters
  • #2
It is an interesting question. In the Amazon Prime sci-fi show "Upload", all cars have a button that allows you to switch between "prioritise driver" and "prioritise pedestrians". That is, the choice is up to the owner of the car, as long as they follow all rules and regulations.

Upload is a not-so-serious sci-fi show, not a serious contribution to the debate. That said, I think it is quite an interesting idea. It is a possible approach to a problem where no solution can be found using only facts/logic.
 
  • #3
anorlunda said:
you would prefer the Tesla

Yes, I want that Tesla. . .

But only if I. . .
anorlunda said:
can have a . SELFISH/ALTRUISTIC. .LIKE A BAT OUT OF HELL toggle switch

Lol. . . .🦇.
PS:
There is no such concept as "Ethics" when .driving..riding in an autopiloted
Tesla. . .So I'd also want a bull bar. . .

[Attached image: a bull bar]


Mounted on the front, which should suffice for. . . "prioritise pedestrians" . . :-p

.
 
Last edited:
  • Haha
Likes anorlunda
  • #4
anorlunda said:
When society's interests conflict with the individual owner's interests, which takes priority? We'll hear this question again and again in many ways in the future. There is no answer that we can all agree on all of the time. Not ever.
Why would anyone want to be part of a society that does not consider his or her interests? That would be totally absurd.

If you believe in liberty and democracy, respecting individuals' own interests should never be up for discussion. When someone says "society's interest," in the end it always serves someone's interest at the expense of others.
anorlunda said:
But Tesla has information that other insurance companies don't. It knows how fast you accelerated, how close you came to other cars, how close to pedestrians, how fast you rounded each curve in the road. It knows how often the car reminded you to stay alert. That enables Tesla to compute an insurance premium for each individual driver. They can give deep discounts to safe drivers and sky-high prices to dangerous ones. For young people especially, car insurance can cost more than the price of the car plus the cost of operation. If they displayed the cost in the car in real time, they might even persuade dangerous drivers to become safer. Of course, creepiness and invasion of privacy are the other side of the coin.
I personally have no problem with this kind of behavior, as long as I still have the freedom NOT to choose Tesla or any other connected vehicle. The problem I have is the fact that I cannot build the type of vehicle I want for myself. Every decision I can make is wrapped up in laws that have already decided what is best for me (or "society"?). In such conditions, my only options are to not have a car (i.e. not participating in society, being an outcast), or to ignore the laws (i.e. living in a parallel society with its own laws). Neither case is good for the group.

The only group I want to be part of is one where you could only convince others of your interpretations about the world we live in, not force them to adopt your ways.
 
  • Like
Likes russ_watters
  • #5
I'm in a similar quandary about automation -- not with cars, though, but with a new thermostat. The furnace stopped working at our house a couple of weeks ago, and it was determined that the main circuit board was out. Rather than replace the board (about $900) for a furnace that was 22 years old, my wife and I opted to spend a bit more to get a new furnace.
The thermostat that came with the new furnace is easier to read and program than the one it replaces, but it's really too clever by half. I programmed it to turn on the furnace at 5am, when my wife gets up. She reported to me that the furnace came on at 3:45. I adjusted the start time to 5:30am, but the furnace still came on well before that time.
After calling the company that installed the furnace, I found out that it's a "smart" thermostat, one that calculates when to turn on the furnace so as to get the house to the desired temperature at the time you set.
All I wanted was for the furnace to turn on at the time I set, not for the Tstat to try to guess when to turn on the furnace. I've been able to outsmart this "smart" device by setting the start time late enough that the furnace comes on about when I want it to. I'd much rather have a dumb device that does what I tell it to do, not what it "thinks" I want.
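What Mark44 ran into is usually called "early recovery" or "adaptive recovery": start the furnace early enough to *reach* the set temperature at the set time. A minimal sketch of the idea, with a made-up fixed recovery rate (real thermostats learn this rate from the house):

```python
from datetime import datetime, timedelta

def furnace_start_time(target_time, target_temp_f, current_temp_f,
                       recovery_rate_f_per_hr=2.5):
    """When to start the furnace so the house *reaches* target_temp_f
    at target_time (rather than starting the furnace at target_time)."""
    deficit = max(0.0, target_temp_f - current_temp_f)
    lead = timedelta(hours=deficit / recovery_rate_f_per_hr)
    return target_time - lead

# House at 62 F overnight, 68 F wanted at 5:00 am:
start = furnace_start_time(datetime(2024, 1, 15, 5, 0), 68, 62)
# At a 2.5 F/hr recovery rate the furnace starts 2.4 hours early,
# around 2:36 am -- the same kind of pre-dawn surprise described above.
```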
 
Last edited:
  • Like
Likes ChemAir, jbriggs444, russ_watters and 5 others
  • #6
Mark44 said:
All I wanted was for the furnace to turn on at the time I set, not for the Tstat to try to guess when to turn on the furnace.
Reducing the options to the obvious:
  • Tell the installer to put in the thermostat functionality you want or you will either
    • Stop payment
    • Blanket the social networks with complaints
  • Buy your own thermostat and install it
  • Cave in and live with it
 
  • #7
Mark44 said:
After calling the company that installed the furnace, I found out that it's a "smart" thermostat, one that calculates when to turn on the furnace so as to get the house to the desired temperature at the time you set.
All I wanted was for the furnace to turn on at the time I set, not for the Tstat to try to guess when to turn on the furnace. I've been able to outsmart this "smart" device by setting the start time late enough that the furnace comes on about when I want it to. I'd much rather have a dumb device that does what I tell it to do, not what it "thinks" I want.

The most important aspect of any automated system is the manual override!
 
  • Like
Likes russ_watters, mfb, sysprog and 1 other person
  • #8
Mark44 said:
I'd much rather have a dumb device that does what I tell it to do, not what it "thinks" I want.
I am still startled when I flip a switch or push a button to have it take a perceivable time to have some effect. I grew up with lights that lit the instant you flipped the switch. The idea that my button push merely notifies a smart device of a request, and that the device will think about it before acting is alien.
 
  • Like
Likes Averagesupernova, sysprog and symbolipoint
  • #9
PeroK said:
The most important aspect of any automated system is the manual override!
I think I found one. The user manual for the thermostat lists 63 ISU (installer setup) options, one of which seems to be the override for the 'stat's cleverness.
anorlunda said:
I am still startled when I flip a switch or push a button to have it take a perceivable time to have some effect. I grew up with lights that lit the instant you flipped the switch. The idea that my button push merely notifies a smart device of a request, and that the device will think about it before acting is alien.
If you're talking about LED lights vs. incandescent lights, there is a noticeable delay before some LED lights illuminate. I don't know what the mechanism for LED lights is, but I doubt that they are doing any thinking.
 
  • #10
The free market should sort this out. Of course the free market includes the insurance companies who are free to set their rates based on their assessment of risks.
 
  • Wow
Likes symbolipoint
  • #11
Dr. Courtney said:
The free market should sort this out

Not sure history is on your side here. Airbags were not popular until they were mandated.
 
  • Like
Likes russ_watters
  • #12
Vanadium 50 said:
Not sure history is on your side here. Airbags were not popular until they were mandated.

The litigation environment is much different now than when air bags were optional. Insurance companies are also much more proactive in understanding the risks of each specific customer, rather than the broad categories used back then. The air bag issue was not so relevant to liability litigation for others endangered by faulty systems. Not every accident gave rise to a lawsuit.

With autopilot systems, nearly every accident that injures third parties will give rise to a lawsuit. The ambulance-chasing lawyers are lined up and ready to go.
 
  • #13
jack action said:
Why would anyone want to be part of a society that does not consider his or her interests? That would be totally absurd.
And why would anyone want to be part of a society that does not consider the interests of others and the common good? That would be totally absurd.

This is the fundamental dilemma which @anorlunda referred to. Society must balance the interests of individuals vs. other individuals and the collective group. There is no simple answer. This is the fundamental raison d'être of politics.

I'll add, as an aside: the issue of altruism in evolution is a fascinating subject related to this. Even our DNA has to struggle with the individual vs. the collective good.
 
  • Like
Likes russ_watters and anorlunda
  • #14
Mark44 said:
She reported to me that the furnace came on at 3:45. I adjusted the start time to 5:30am, but the furnace still came on well before that time.
After calling the company that installed the furnace, I found out that it's a "smart" thermostat, one that calculates when to turn on the furnace so as to get the house to the desired temperature at the time you set.
Most programmable thermostats of recent model years allow you to select whether you want the "early" feature. The people who installed your new thermostat would have left you the instructions. Failing that, you can get the manual online.
For example, I use a Lux TX1500Uc. The "early recovery" feature is shown in the TX1500Uc Thermostat Manual on page 20, item number 5.

Edit: I just read your post from 6 hours ago. So you found the feature. Good that you found it - not so good that the people who installed it couldn't have given you better support.
 
  • #15
I've been part of the development of automotive radar - although more the basic radar features end than with the code that makes such "ethical decisions". I have also coded ECDIS-N systems - marine navigation systems that can drive the autopilot.

With the ECDIS and ECDIS-N systems, the functionality is regulated and specified very precisely. The objective is to reliably and expertly maneuver the ship along a prepared and properly reviewed course. I don't recall any "ethical" decisions made by these systems. The course is checked against maps and bathymetric data to make sure that the ship will not run aground. But no attempt is made to determine whether the pilot is watching what is happening or whether the course conflicts with other shipping. So avoiding accidents such as the USS Fitzgerald / ACX Crystal collision is entirely up to the crew.

Of course, with a Tesla, the liability is with the driver. He is required to stay awake and avoid driving the car into other people and property - and to follow the rules of the road. The potential for the driver to violate this trust did not start with the introduction of driver-assistance features. It started as soon as someone took a motor vehicle onto public roads.

Still, I am not happy with what Tesla has done. It started with calling their system "autopilot" - a clear suggestion that drivers can use this system as aircraft pilots are shown using their autopilots in movies and TV.

At some point, these systems will start to move from "driver assistance" to "automated driving". That will happen when these systems can accumulate a better driving record than a large majority of human drivers.
 
Last edited by a moderator:
  • #16
.Scott said:
Of course, with a Tesla, the liability is with the driver. He is required to stay awake and avoid driving the car into other people and property - and to follow the rules of the road.
Yes, that's how Tesla wants it, and, as you said, that is in fact all these systems can really do so far.

However, from a systems design perspective I'm not OK with expecting humans to reliably do something they aren't very capable of - in particular, expecting them to pay attention while almost always doing nothing. This doesn't seem too different from expecting a steel beam in a building to almost always be strong enough.

- An airline would be held liable if it allowed an unqualified pilot to fly.
- 737Max crashes could have been avoided if only the pilots would have followed the proper "run-away trim" procedure, yet in practice Boeing has significant liability.
- I could lose a lawsuit if I had created an "attractive nuisance". Like not putting a fence around a swimming pool, expecting parents to always watch their kids. Or perhaps like designing a dangerous machine that is only safe with a perfect operator.

Society needs to figure out how to share this liability among all parties. That's why we have governments, laws, regulators, etc.

I think Tesla has been getting off easy in the PR aspect of automation. Several other auto manufacturers are being more careful vis-à-vis the human element.
 
  • Like
Likes berkeman
  • #17
.Scott said:
Good that you found it - not so good that the people who installed it couldn't have given you better support.
The unit is a Honeywell thermostat. The installers left the user manuals, in English, Spanish, and French. I was able to go through the 63 installer setup options to find the one that did what I want.
 
  • #18
Mark44 said:
The installers left the user manuals, in English, Spanish, and French.

Are they the same? I had an appliance with manuals in French and German. In a particular situation, the German manual explained how to fix it. The French manual said to call the repairman.

.Scott said:
So avoiding accidents such as the USS Fitzgerald / ACX Crystal collision is entirely up to the crew.

That did not end well.

I wonder how much of the problem is the sense of complacency that the automation will surely alert us if something goes wrong.
 
  • Like
Likes sysprog
  • #19
Vanadium 50 said:
Are they the same?
No idea. The English version had the info I needed, so I didn't go to the bother of translating the Spanish version, or try to translate the French version. My Spanish is much better than my French.
 
  • #20
.Scott said:
although more the basic radar features end than with the code that makes such "ethical decisions".
In this case, the OP says that it was not software making the ethical decision, but rather the managements of GM and Tesla who decided what to allow their systems to do and how to prioritize safety versus useful functionality.
 
  • #21
DaveE said:
However, from a systems design perspective I'm not OK with expecting humans to reliably do something they aren't very capable of - in particular, expecting them to pay attention while almost always doing nothing. This doesn't seem too different from expecting a steel beam in a building to almost always be strong enough.
I agree. And this impacts aircraft autopilot systems as well. It is very troubling that airlines expect their pilots to use their flight directors all the time - because they conserve fuel. The result is that the pilot only gets hands-on flying in a simulator or when something happens that kicks out the autopilot. I was completely unsurprised when an accident such as Air France 447 happened because the crew did not recognize a stall condition until it was too late.
DaveE said:
I think Tesla has been getting of easy in the PR aspect of automation. Several other auto manufacturers are being more careful vis-à-vis the human element.
I agree - but the standards are set low. Misuse of the "autopilot" is not enough to make the Tesla an overall unsafe vehicle. In fact, it is recognized as very safe, although only in very minor part due to the driver assistance.
 
Last edited:
  • Like
Likes russ_watters and DaveE
  • #22
anorlunda said:
In this case, the OP says that it was not software making the ethical decision, but rather the managements of GM and Tesla who decided what to allow their systems to do and how to prioritize safety versus useful functionality.
This is entirely true. Radar units are provided to auto manufacturers with inherent abilities to detect and track the lane, other vehicles, and other objects - and scores of other things. But top level code from the automaker is compiled with the radar code. The final firmware image is produced and tested by the automaker - although this may involve substantial tech support from the radar manufacturer.
 
  • #23
DaveE said:
And why anyone would want to be part of a society that does not consider the interests of others and the common good? That would be totally absurd.
It is exactly what I said: when you join a group, it shall consider your interests. That would be true for anyone joining the group.

The same statement, in other words: why would anyone want to be in a society that doesn't consider the interests of some of its members?
DaveE said:
This is the fundamental dilemma which @anorlunda referred to. Society must balance the interests of individuals vs. other individuals and the collective group. There is no simple answer. This is the fundamental raison d'être of politics.
There is no dilemma. There is no «who in the group will take one for the team?» There are no martyrs. In a group, whatever you put in common, everyone shall gain something from it.

If we cannot find common ground, we don't put this specific subject in common. Period. It's not because we agree on, say, a justice system that we have to agree on an educational system. Not imposing your views on people that don't agree with you, that should be the fundamental raison d'être of politics.

@anorlunda 's wording was «When society's interest conflict [...]». Who is deciding for the «society»? And what is that «interest»? Usually - not saying it is the case here - «society's interest» is most likely a neat way of disguising the terms «my interest», but ennobling it a little bit. «Don't do it for me, do it for society.»

This is like the question about self-driving cars: given a choice, should it protect the occupants or the people surrounding the car? Easy: the occupants. Always. How can I be sure? With one simple example. You buy a self-driving car and you put your kids in it. There is no way a parent will say: «Yes, if need be, this car that I bought can sacrifice my kids for the life of others.» That would be totally insane. There is a self-driving car out there that could kill my kids playing outside because it chose to save its occupants instead? Welcome to life: danger is everywhere and you have to learn to live with it. Sometimes you lose. If I bought that car and my kids were in it, I wouldn't want that car to hesitate about saving them. I can accept that there is danger out there that I don't control, but making a conscious decision (for example, buying a tool or a service) that I know may go willingly against my interest? That is absurd.

Praise of martyrs is a selling point for people who want to gain something at the expense of somebody else. Always.
 
  • #24
Let's stick with the scenario in the OP rather than a hypothetical kill A rather than kill B.

The OP said that GM's autopilot will work only on pre-planned roads, which do not include residential streets. That restricts functionality and convenience to limit use to the safest cases. The parents with kids living on the residential streets can be considered "society" in this scenario, and the owner of the car who wants full-time autopilot the individual.
 
  • Like
Likes russ_watters and DaveE
  • #25
When the autopilot car violates some law or safety, who do you blame?

Automation could be pushed too far. I believe in a car in which the living human driver is the person controlling the car's actions.
 
  • #26
symbolipoint said:
When the autopilot car violates some law or safety, who do you blame?
If the design violates the law, you blame the manufacturer.
If the way it is operated violates the law, you blame the driver.

It may help to remember that it is not a question of 100% auto versus manual. There is a continuum of partially automated possibilities. GM describes their autopilot system as enhanced cruise control. You probably already own a car with cruise control.

It may also be helpful to consider that the airline industry would not exist on such a large scale if airliners did not have so much autopilot automation. Pilots flying manually would not be able to maintain the speed, heading, and altitude restrictions required, nor be depended on to stay awake and alert on long flights.

The newest gadget for small private airplanes is an auto-land button that passengers can push in case the pilot dies in-flight. Push the button and it finds an airport, communicates with controllers, and lands the plane in any weather. That takes a remarkable degree of automation.

The example in the OP interests me because it involves private decisions and free market competition, with no government interference, no compulsion.

Philosophically, we could say there will be a wide range of car autopilots offered and the free market will decide. But in this case, it is not only which car I buy, but which car the other drivers on the road have that worries me. My libertarian free-market approaches do not work well in this case.
 
  • #27
anorlunda said:
The parents with kids living on the residential streets can be considered "society" in this scenario, and the owner of the car who wants full-time autopilot the individual.
The problem with a one-or-the-other scenario is that you assume someone wants the other to die or get hurt. As a new car owner, do you want your car to hurt people around you? The obvious answer is no. If you choose a car with an autopilot, it is because you trust that it will do the job as well as (or better than) you would.

Is Tesla a careless company that doesn't care about the well-being and security of people (in and out of the car)? Are Tesla owners people who don't care about others? GM doesn't trust its system; Tesla trusts its own. That may mean one company is safer than the other, but it also may mean that one system is more trustworthy than the other.

Until I hear about Teslas being involved in more accidents than other cars, I will not act differently around the Tesla cars I come across on the road. But by that time - if it comes - the social system in place will have taken those cars off the road (or disabled the autopilot) well before I start worrying.

Do you trust Tesla to remove unsafe cars? Do you trust government officials to remove unsafe cars? Do you trust other people not to use unsafe cars? If you don't trust other people in your society, why are you choosing to live with them?

And the fact that other people make decisions different from yours does not mean they are not trustworthy.
 
  • #28
Federal, state, and local governments already regulate driving behavior by means of traffic lights, speed limit signs, highway markings, and not completely effective enforcement agencies. In the distant future, I foresee that all vehicles that use public roads will be required to have auto-pilot features, and roads will be constructed with systems that interact with vehicles and enforce traffic laws. The future versions of "gated communities" and military bases will have private roads with systems that restrict which vehicles may drive on them.

As to the current Tesla vs GM controversy, I don't know which can claim superior morality. They both lead in the general direction of making auto-pilot computers a standard feature of vehicles by making the technology economical. Once it is reasonable to require that all vehicles on public roads have auto-pilot features, it won't be long before enforcement of traffic laws is done automatically.
 
  • #29
Tesla added a major upgrade to its system after Consumer Reports' test. It would be interesting to see another test.
anorlunda said:
I'm tempted to generalize and say that if you are the car owner, you would prefer the Tesla, but if you are not the owner, you prefer the GM.
If I'm not the owner I prefer no other cars using the same street at the same time, but that's hardly a reasonable request.
Teslas have a low accident rate, with or without autopilot, and I see nothing that would point to a higher accident rate with autopilot active. Human drivers will largely stay the same while the software improves continuously.
Dr. Courtney said:
The free market should sort this out. Of course the free market includes the insurance companies who are free to set their rates based on their assessment of risks.
The free market is terrible when the life of people is concerned. It's too cheap to take risks. Maybe you need to pay a million dollars once in a while. "Oh, I'm sorry your child died. Here, have some money, that's 20% cheaper than making our software safer". This is not a hypothetical situation - just the 20% is a random number because the real margin is not public.
And that's already the best case where you can sue someone. Who are you going to sue for long-term pollution of the environment, for example? If you just hope for the free market to solve everything you get an environment that's completely toxic. Literally.

Vanadium 50 said:
Are they the same? I had an appliance with manuals in French and German. In a particular situation, the German manual explained how to fix it. The French manual said to call the repairman.
🤣
 
  • #30
mfb said:
Human drivers will largely stay the same while the software improves continuously.
I believe that many drivers dream of the day when they can have level 5 automation and do other things in the back seat while their car drives them to the destination. The latest releases from both Tesla and GM are at level 2.

Problems and bad publicity happen when drivers with a level 2 car act as if it were a higher level. I'm sure we have all seen video on the news of cars on the highway with the driver asleep. One reason GM ranked higher than Tesla in Consumer Reports' testing was its more effective driver monitoring system, which attempts to sense driver attention and alertness.
LEVELS OF AUTOMATION: WHO DOES WHAT, WHEN
Level 0: The human driver does all the driving.
Level 1: An advanced driver assistance system (ADAS) on the vehicle can sometimes assist the human driver with either steering or braking/accelerating, but not both simultaneously.
Level 2: An advanced driver assistance system (ADAS) on the vehicle can itself actually control both steering and braking/accelerating simultaneously under some circumstances. The human driver must continue to pay full attention ("monitor the driving environment") at all times and perform the rest of the driving task.
Level 3: An automated driving system (ADS) on the vehicle can itself perform all aspects of the driving task under some circumstances. In those circumstances, the human driver must be ready to take back control at any time when the ADS requests the human driver to do so. In all other circumstances, the human driver performs the driving task.
Level 4: An automated driving system (ADS) on the vehicle can itself perform all driving tasks and monitor the driving environment – essentially, do all the driving – in certain circumstances. The human need not pay attention in those circumstances.
Level 5: An automated driving system (ADS) on the vehicle can do all the driving in all circumstances. The human occupants are just passengers and need never be involved in driving.
Source:
https://www.nhtsa.gov/technology-innovation/automated-vehicles
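For illustration, the table boils down to who must monitor the road at each level. A compact lookup, paraphrasing the NHTSA definitions above (the function and its threshold are my own simplification):

```python
# Who monitors the driving environment at each SAE level
# (paraphrased from the NHTSA table above).
SAE_MONITOR = {
    0: "human - drives everything",
    1: "human - ADAS assists with steering OR speed",
    2: "human - ADAS controls steering AND speed, driver must still watch",
    3: "system - but the human must take over when requested",
    4: "system - within certain circumstances, no human attention needed",
    5: "system - in all circumstances",
}

def driver_may_look_away(level):
    """True once the automated system monitors the environment itself.
    Level 3 still requires a fallback-ready driver, so the cautious
    threshold used here is level 4."""
    return level >= 4
```

The "driver asleep on the highway" videos are level 2 cars being treated as if `driver_may_look_away` were already true.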
 
  • Like
  • Informative
Likes russ_watters, Klystron, .Scott and 1 other person
  • #31
anorlunda said:
Problems and bad publicity happens when drivers with a level 2 car act as if it was a higher level.
Therein lies the reason engineers are always trying to make their stuff "idiot-proof". It's not just a cute phrase.
 
  • #32
anorlunda said:
I believe that many drivers dream of the day when they can have level 5 automation and do other things in the back seat while their car drives them to the destination. The latest releases from both Tesla and GM are at level 2.
A level 2 system that's reliable enough becomes level 3, a level 3 system that can handle enough rare situations becomes level 4. Both are gradual changes in terms of actual improvement, legislation might introduce discrete steps.

Going from 4 to 5 could be very different for different companies. GM's Super Cruise won't evolve into a level 5 system; instead, GM seems to be trying to go to level 5 directly: Reuters news. Waymo would need to get rid of the mapping requirement for roads. For Tesla the change might be the smallest - the software never relied on accurate maps, and it can already handle parking lots and most (all?) road types.
 
  • Like
Likes anorlunda
  • #33
anorlunda said:
Pilots flying manually would not be able to maintain the speed, heading, and altitude restrictions required, nor depended on to stay awake and alert on long flights.
In order to get your pilot's license, you need to do a 400 mile solo triangle. It involves hours of flying and I have never heard of anyone doing it with autopilot - although it would be legal to do so.

Short of autopilot, there is trim. Once you reach your target altitude, bearing, and speed, you trim up the plane to reduce the pilot control forces. At that point, you are flying with very light "finger tip" control input. But even in situations where rough weather prevented finger tip operation, flying for hours was not a problem. It isn't like driving a car. Routine "driving" is easier but procedures (routine and exceptions) are more involved.

Altitude assignments need to be followed to within 50 feet. For a practiced pilot this is not a serious challenge.
Whether the plane is on autopilot or not, the pilot should be continuously monitoring what the plane and the environment are doing. Personally, I find staying alert easier without autopilot than with it.
 
Likes Klystron
  • #34
Mark44 said:
I'd much rather have a dumb device that does what I tell it to do, not what it "thinks" I want.
My Toyota had a "feature" where the horn would honk non-stop to warn me that the battery was low. Of course it would always happen at inconvenient times. What a royal pain in the neck. You'd think it was designed by lawyers.
 
Likes symbolipoint
  • #35
bob012345 said:
My Toyota had a "feature" that the horn would honk non-stop to warn me the battery was low.
Which, of course, would drain the battery even more quickly.
 
Likes symbolipoint
1. What is the main ethical dilemma surrounding automation in cars?

The main ethical dilemma surrounding automation in cars is whether the car should prioritize the safety and well-being of the individual driver or the safety and well-being of society as a whole. In other words, should the car serve the interests of the individual or the greater good?

2. How do self-driving cars make ethical decisions?

Self-driving cars use a combination of sensors, cameras, and algorithms to make split-second decisions while on the road. These decisions are based on pre-programmed rules and regulations, as well as real-time data about the environment and potential hazards. However, there is ongoing debate about how these decisions are made and whether they align with ethical principles.
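The sense-decide-act loop described here can be sketched as a toy rule-based pipeline. Everything below (the `Perception` fields, the thresholds, the action names) is a hypothetical illustration of pre-programmed priority rules, not any vendor's actual logic:

```python
from dataclasses import dataclass

@dataclass
class Perception:
    """Toy snapshot of fused sensor/camera data (hypothetical fields)."""
    obstacle_distance_m: float  # distance to nearest obstacle ahead
    speed_mps: float            # current vehicle speed
    pedestrian_detected: bool

def decide(p: Perception) -> str:
    """Pre-programmed rules applied in priority order, highest first."""
    # Rule 1: emergency-brake if a pedestrian is close ahead.
    if p.pedestrian_detected and p.obstacle_distance_m < 20.0:
        return "emergency_brake"
    # Rule 2: keep a speed-dependent safety margin (a 2-second rule).
    if p.obstacle_distance_m < 2.0 * p.speed_mps:
        return "slow_down"
    return "maintain_speed"

print(decide(Perception(15.0, 10.0, True)))    # emergency_brake
print(decide(Perception(18.0, 10.0, False)))   # slow_down
print(decide(Perception(100.0, 10.0, False)))  # maintain_speed
```

The ethical debate in the thread is precisely about who sets these rules and thresholds, and whose interests they encode: the owner's or society's.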

3. What are the potential benefits of cars serving society over the individual?

If cars are programmed to prioritize the safety and well-being of society, it could potentially lead to a decrease in overall accidents and fatalities on the road. This could also reduce traffic congestion and improve the efficiency of transportation systems. Additionally, it could promote a more equitable distribution of resources and opportunities for all individuals.

4. What are the potential drawbacks of cars serving society over the individual?

One potential drawback is the loss of individual autonomy and control. If cars are programmed to make decisions for the greater good, it could limit the freedom and choices of the individual driver. There is also the possibility of unintended consequences or biases in the programming that could result in harm to certain groups of people.

5. How can we ensure ethical considerations are taken into account in the development of automated cars?

To ensure ethical considerations are taken into account, it is important for developers, policymakers, and the public to engage in ongoing discussions and debates about the ethical implications of automated cars. This could involve establishing ethical guidelines and regulations, conducting thorough testing and risk assessments, and involving diverse perspectives in the decision-making process. Transparency and accountability are also key in ensuring that ethical considerations are prioritized in the development and use of automated cars.
