Intelligent Computer Systems...

  • Thread starter KF81
Hi all.

Let's say, for argument's sake, that creating Artificial General Intelligence is only a matter of time, and that once we do, superintelligent computer systems will not be far behind.

I have seen a lot of talks online about the ethics involved, the plausibility of creating such systems, and how they could be dangerous for humanity if we are not careful and responsible.

But I have not seen anything explaining in detail how these systems could go wrong and how they could be dangerous to humanity.

Yes, automating weapons systems is a stupid idea, even though that possibility is being considered by some. Also, the more jobs that get automated, the more people are out of work, which increases unemployment.

But apart from those two examples, I don't know what the big deal is.

I was wondering if there is anybody here who could explain to me some ways that creating superintelligent computer systems could be dangerous to humanity, and in what ways they could get out of control and have a major negative effect on us.

Thanks!
 
Yes, automating weapons systems is a stupid idea ...
So you'd rather our soldiers die than our machines? I disagree.
I was wondering if there is anybody here who could explain to me some ways that creating superintelligent computer systems could be dangerous to humanity
True AI to the full extent would create an entirely different species of sentient beings. Who knows what they might think of us? They might decide we are a nuisance that needs to be eradicated.
 
Machines aren't perfect, and in the face of nature's randomness they may do the unexpected in times of danger.

It's hard to give an example of what might go wrong. An AGI would depend on sensor technology and trained experience to make decisions.

However, a sequence of sensor failures, or some cascading effect that hits the sensors in a particular order, could make the AGI believe something is happening when it's not.

One example is the Boeing 737 MAX, where software trying to counteract a faulty sensor reading instead pushed the plane into a nosedive and would not let the pilots take over and do the right thing.

Air France Flight 447 had a similar issue when supercooled water, an extreme event, instantly froze on the pitot tube sensors.

The computer systems started to shut down, thinking the plane was losing speed, and the pilots were unable to stop the cascading effects and lost control of the plane. See the NOVA episode reconstructing the crash.

One could argue that backup sensors, or sensors that test the sensors, could have fixed the problem, but that only delays the issue until a bigger, more unpredictable failure occurs.

Murphy's law prevails every time, no matter how good our engineering skill or an AGI's intelligence is.

Another example is Three Mile Island, where the operators took all the wrong steps to contain the runaway reaction based on bad sensor data.
 

anorlunda

True AI to the full extent would create an entirely different species of sentient beings. Who knows what they might think of us? They might decide we are a nuisance that needs to be eradicated.
Is that bad, or is it the foreseeable next step in evolution? We are so used to thinking of good/bad in purely anthropocentric terms that we can't imagine any other viewpoint.
 
Is that bad, or is it the foreseeable next step in evolution? We are so used to thinking of good/bad in purely anthropocentric terms that we can't imagine any other viewpoint.
I think it makes no sense to say that it is "good" or "bad" without knowing what the outcome will be. It could be fantastically wonderful or catastrophically awful. For humanity, I mean.
 

Tom.G

We have automobiles now that are "intelligent" enough that we don't have to pay full attention to driving. They sometimes run into tractor-trailer trucks at speeds high enough to decapitate us... and we aren't even at war with them! (As far as I know.)
 

BvU

But I have not seen anything explaining in detail how these systems could go wrong and how they could be dangerous to humanity.
Read Isaac Asimov.
 
We have automobiles now that are "intelligent" enough that we don't have to pay full attention to driving. They sometimes run into tractor-trailer trucks at speeds high enough to decapitate us... and we aren't even at war with them! (As far as I know.)
Which really has nothing to do with the topic of this thread, which is superintelligent computer systems. Comparing "intelligent" cars to superintelligent computer systems is like comparing an ant to Einstein.
 

FactChecker

One could argue that backup sensors, or sensors that test the sensors, could have fixed the problem, but that only delays it until a bigger failure occurs.
Why would you expect a "bigger" failure? Redundant sensors and systems can greatly reduce the error rate without causing a bigger failure. Automated systems are the only path to continuous improvement in tasks where human error would otherwise limit the improvement.
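To make the redundancy argument concrete, here is a minimal sketch in Python of triple modular redundancy with a median voter; the sensor values are hypothetical, not taken from any real avionics system. With three independent sensors, the median ignores one arbitrarily wrong value, so a single stuck or frozen sensor cannot dominate the decision.

```python
# Triple modular redundancy (TMR) sketch: three independent sensors,
# one median voter. Hypothetical values for illustration only.
def vote(readings):
    """Return the median of three sensor readings.

    The median masks any single faulty reading: one sensor can report
    an arbitrarily wrong value without changing the voted output."""
    assert len(readings) == 3
    return sorted(readings)[1]

# Suppose one pitot-style airspeed sensor freezes and reports 0;
# the voter still returns a sane value from the two healthy sensors.
print(vote([251.0, 0.0, 249.5]))  # -> 249.5
```

The catch, as the quoted post notes, is the common-mode case: if one event (like icing) takes out all three channels at once, the voter happily reports a consistent wrong answer, which is why redundancy reduces the error rate rather than eliminating it.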
 

FactChecker

Yes, automating weapons systems is a stupid idea, even though that possibility is being considered by some.
I would not be at all surprised if there were many more people working on this than on anything else, including autonomous cars.
And also, the more jobs that get automated the more people there are not working which creates an increase in unemployment.

But apart from those two examples, I don't know what the big deal is.
Those are important examples to be dismissing. Are you really asking about the possibility of machine hostility toward humans?
I was wondering if there is anybody here who could explain to me some ways that creating superintelligent computer systems could be dangerous to humanity, and in what ways they could get out of control and have a major negative effect on us.
One often-mentioned use of quantum computers is their expected ability to break any code or password protection. That ability could cut both ways.
 
Why would you expect a "bigger" failure? Redundant sensors and systems can greatly reduce the error rate without causing a bigger failure. Automated systems are the only path to continuous improvement in tasks where human error would otherwise limit the improvement.
Yes, it reduces the frequency of occurrence, but sometimes a more extreme event can override both the original sensors and the backups, creating a sensor overload that confuses the human, who then makes a faulty decision with more catastrophic results.

There was a recent article arguing that our penchant for safety might make things more dangerous. It cited safety helmets, which made people believe they could attempt something more life-threatening because they were protected, and thus get killed instead of merely seriously injured.
 

FactChecker

There was a recent article arguing that our penchant for safety might make things more dangerous. It cited safety helmets, which made people believe they could attempt something more life-threatening because they were protected, and thus get killed instead of merely seriously injured.
That's a good point. I guess one should say that the improved systems offer the opportunity for increased safety. Perhaps humans cannot be saved from themselves. But it still means that a person who wants to be safe can take advantage of the improved systems. This reminds me of the comparison between a cruiser motorcycle and a racing motorcycle. Although there are a lot of crazy people who die on racing motorcycles, they are much safer than a cruiser when ridden like a cruiser. They handle turns and braking much better.
 
I think everyone is ignoring my statement in post #2
True AI to the full extent would create an entirely different species of sentient beings. Who knows what they might think of us? They might decide we are a nuisance that needs to be eradicated.
You're all talking about faulty sensors and such, which to me have absolutely nothing to do with the ultimate issue.

Personally, I do NOT think that another species of high intelligence would be hostile to us, BUT ... I think everyone is forgetting that humans have had many hundreds of thousands of years of group / clan / societal association, and AI would likely have none of that, so who knows what would happen.

It just seems to me that everyone is trivializing the problem based on modern robots and sensors, not on the actual issue of another sentient species. Of course, I could be wrong that even incredibly intelligent computers would become sentient, but I don't see how it could be otherwise.
 

FactChecker

Personally, I do NOT think that another species of high intelligence would be hostile to us, BUT ... I think everyone is forgetting that humans have had many hundreds of thousands of years of group / clan / societal association, and AI would likely have none of that, so who knows what would happen.

It just seems to me that everyone is trivializing the problem based on modern robots and sensors, not on the actual issue of another sentient species.
I think that you have a good point and are more in tune with what the OP had in mind. But I cannot say much that would be measurably different from science fiction. I do believe that in the near term there is a lot of AI being worked on in military weapon systems that can kill people. I also think that neural network systems that are free to develop their own patterns, rules, and categorical systems might turn those weapon systems hostile. An AI system / network which identifies friend or foe could conceivably go rogue.
 
10,706
4,269
When I think about the student misidentified by Apple's face recognition to the point where he was kicked out of a store, and then about someone using a similar system to identify fugitives wanted dead or alive, it gives me pause, and some shivers too.
 
When I think about the student misidentified by Apple's face recognition to the point where he was kicked out of a store, and then about someone using a similar system to identify fugitives wanted dead or alive, it gives me pause, and some shivers too.
I think you are making comments based on current technology and sensors, which have nothing to do with a new sentient species of superintelligent AI, which is what the OP was asking about.

By the time we get to such a situation (if we ever do, and I think we will), sensors will be better, and I have no doubt at all that AI face recognition will be much better than human face recognition.

I'm not suggesting that super intelligent AI will never make mistakes, just that it is likely to be better at pretty much everything than we humans are.
 
Yes, but history repeats itself, and your argument has been used forever to rationalize that things will be better. They are different, they are better, and the gotchas get better hidden, but they are still there, will always be there, and someone will suffer when they fail.
 
Yes, but history repeats itself, and your argument has been used forever to rationalize that things will be better. They are different, they are better, and the gotchas get better hidden, but they are still there, will always be there, and someone will suffer when they fail.
Yes, and again, I'm not saying that super intelligent AI won't make mistakes, just that they will make fewer than humans.

My understanding is that even today's smart cars are very close to being safer than human drivers, and are already there in most normal driving situations.
 
Yes, they're safer for passengers. We don't know about pedestrians, though.
 
Yes, they're safer for passengers. We don't know about pedestrians, though.
My understanding is that we DO know, and the results are good, just not good enough yet to totally trust them on the road, because the insurance liability issue is such a big stumbling block.
 
@phinds I'm not disagreeing with you. I realize that you have hit all the right points. I just want to make clear that technology will always have adverse effects that we can't anticipate, whether a super general intelligence is behind it or not.

In some sense, our hope is that Asimov's Laws of Robotics will prevail, even though we know inherently that it will be devilishly hard to craft a system that can actually follow them as infallibly as Asimov envisioned. Even with that, many of his stories revolved around law circumventions that could not be anticipated until they happened. I guess that's why I loved his stories so much.
 
@phinds I'm not disagreeing with you. I realize that you have hit all the right points. I just want to make clear that technology will always have adverse effects that we can't anticipate, whether a super general intelligence is behind it or not.
Agreed.

In some sense, our hope is that Asimov's Laws of Robotics will prevail, even though we know inherently that it will be devilishly hard to craft a system that can actually follow them as infallibly as Asimov envisioned. Even with that, many of his stories revolved around law circumventions that could not be anticipated until they happened. I guess that's why I loved his stories so much.
If we do in fact create what amounts to another sentient race, as I envision, I see no reason why the Three Laws, or anything like them, should be something they care about at all. That would be rather like expecting Homo sapiens to follow the grooming traditions of monkeys.
 
I guess I'll close with this article from Gizmodo about why we won't be saved by Asimov's Three Laws.

 

gleem

Well, this Hot Thread seems to have cooled off. Maybe we can warm it up a bit.

First, to @KF81: get a copy (public library?) of "Life 3.0" by Max Tegmark for examples of AGI going rogue.

Human endeavor is fraught with unintended consequences. Famous last words: "It seemed like a good idea at the time." I think we are getting weak but ominous messages from our current implementations of AI. Some systems seem to develop capabilities that were not anticipated. Some do things that we cannot fathom. Even with non-AI systems or activities, we often find well-intentioned endeavors go awry with unforeseen complications.

We believe that we can control AI because we "designed" it. I think this is an illusion born of our arrogance. The problem with AI is that it could well be the last human accomplishment. Let's look at an analogous example. Suppose we could bioengineer a bacterium that produces a cure for cancer. This bacterium also turns out to be a "superbug" with such extraordinary characteristics as a long incubation period, extreme infectivity, long life away from a host, and near-100% mortality. First, one would want to guarantee its containment. Considering man's fallibility, one might first think about whether or not to proceed with this project at all, even though it would save millions of lives, at the risk of billions of deaths. The difference between this and AI is that we know the problems with pathogens. We can only guess at the problems that might occur with AI.

AI systems designed for well-defined tasks should not be a problem, except perhaps for validating the results from the system. But AGI is another matter. Our first task in developing AGI is to ensure that it is contained and has no access to a network, let alone the internet. Any information the system needs, and only the information it needs, must be provided on media such as SSDs, with no hookup to another computer system; the SSD must then be scrubbed before reuse. All interaction with humans should be through audio/visual routes or other means that do not need another computer to process. The AGI system must not control any external system, nor have access to information that we do not wish it to have. We certainly do not wish it to be like us.

The problem I see is that there is such a fascination with AGI that proper precautions will not be taken in the race to be the first to demonstrate its existence. Another problem is that it might be developed in secret, its developers conspiring with the system for purposes of power or wealth (Life 3.0).

As with any machine, AI does not have to be sentient in the human sense to be dangerous.
 
A good analogy for the out-of-control issue is the proliferation of malware today. Even if you shut down a given network of machines and refresh their hard drives with clean software, they will soon get infected by some form of malware.

At one company I worked at, we were attacked by Nimda and did all the proper cleaning. However, on a large network, some machine somewhere wasn't cleaned, and it would reinfect new machines recently added to the network, even as their OS was being installed (network install). It was quite distressing to have your virus scanner report that Nimda was once again on the machine you had just finished cleaning.

Imagine an intelligent AI that could persist on the internet as a whole. How would we ever remove it without shutting down every machine, wiping its drive, refreshing the OS, restoring the software, and rebooting?
 

