Intelligent Computer Systems....

In summary, the author is discussing how Super Intelligent Computer systems could be dangerous to humanity and how we don't know what could go wrong.
  • #1
KF81
Hiya,

Let's just say, for argument's sake, that creating artificial general intelligence is only a matter of time, and that once we do, super intelligent computer systems will not be far behind.

I have seen a lot of talks online about the ethics involved, the plausibility of creating such systems, and how they could be dangerous for humanity if we are not careful and responsible.

But I have not seen anything explaining in detail how these systems could go wrong or how they could be dangerous to humanity.

Yes, automating weapons systems is a stupid idea, even though that possibility is being considered by some. And the more jobs that get automated, the fewer people are working, which increases unemployment.

But apart from those two examples, I don't know what the big deal is.

I was wondering if anybody here could explain some ways that creating super intelligent computer systems could be dangerous to humanity, and in what ways they could get out of control and have a major negative effect on us.

Thanks!
 
  • #2
KF81 said:
Yes, automating weapons systems is a stupid idea ...
So you'd rather our soldiers die than our machines? I disagree.
I was wondering if anybody here could explain some ways that creating super intelligent computer systems could be dangerous to humanity
True AI to the full extent would create an entirely different species of sentient beings. Who knows what they might think of us? They might decide we are a nuisance that needs to be eradicated.
 
  • #3
Machines aren't perfect, and in the face of nature's randomness they may do the unexpected in times of danger.

It's hard to give an example of what might go wrong. An AGI would depend on sensor technology and trained experience to make decisions.

However, a sequence of sensor failures, or some cascading effect that hits the sensors in a particular order, could make the AGI believe something is happening when it isn't.

One example is the Boeing 737 MAX, whose software tried to counteract a faulty sensor reading but instead pushed the plane into a nosedive, without allowing the pilots to take over and do the right thing.

Air France Flight 447 had a similar issue when supercooled water, an extreme event, instantly froze on the pitot tube sensors.

The computer systems started shutting down, thinking the plane was losing speed, and the pilots were helpless to stop the cascading effects and lost control of the plane. See the NOVA TV episode on the crash reconstruction.

One could argue that backup sensors, or sensors that test the sensors, could have fixed the problem, but that only delays the issue until a bigger, more unpredictable failure occurs.

Murphy's law prevails every time, no matter how good our engineering skills or an AGI's intelligence.

Another example is Three Mile Island, where the nuclear power engineers took all the wrong steps to contain the runaway reaction, based on bad sensor data.
 
  • #4
phinds said:
True AI to the full extent would create an entirely different species of sentient beings. Who knows what they might think of us? They might decide we are a nuisance that needs to be eradicated.
Is that bad, or is that the foreseeable next step in evolution? We are so used to thinking of good/bad in purely anthropic terms, that we can't imagine any other viewpoint.
 
  • #5
anorlunda said:
Is that bad, or is that the foreseeable next step in evolution? We are so used to thinking of good/bad in purely anthropic terms, that we can't imagine any other viewpoint.
I think it makes no sense to say that it is "good" or "bad" without knowing what the outcome will be. It could be fantastically wonderful or catastrophically awful. For humanity, I mean.
 
  • #6
We have automobiles now that are "intelligent" enough that we don't have to pay full attention to driving. They sometimes run into tractor-trailer trucks at a speed high enough to decapitate us... and we aren't even at war with them! (as far as I know)
 
  • #7
KF81 said:
But I have not seen anything explaining in detail how these systems could go wrong or how they could be dangerous to humanity.
Read Isaac Asimov.
 
  • #8
Tom.G said:
We have automobiles now that are "intelligent" enough that we don't have to pay full attention to driving. They sometimes run into tractor-trailer trucks at a speed high enough to decapitate us... and we aren't even at war with them! (as far as I know)
Which really has nothing to do with the topic of this thread, which is Super Intelligent Computer systems. Comparing "intelligent" cars to super intelligent computer systems is like comparing an ant to Einstein.
 
  • #9
jedishrfu said:
One could argue that backup sensors or sensors that test the sensors could have fixed the problem but that only delays it until a bigger failure occurs.
Why would you expect a "bigger" failure? Redundant sensors and systems can greatly reduce the error rate without causing a bigger failure. Automated systems are the only path of continuous improvement in tasks where human errors would otherwise limit the improvement.
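As a toy illustration of how redundancy reduces, but does not eliminate, sensor risk, here is a minimal majority-vote (triple modular redundancy) sketch in Python. The `vote` function, its tolerance, and the numbers are invented for illustration; this is not any real avionics system's logic:

```python
# Toy sketch of triple modular redundancy (TMR) for one sensor value.
# The function name, tolerance, and readings are made up for illustration;
# real avionics voting logic is far more involved.

def vote(readings, tolerance=2.0):
    """Return (value, trusted): the median of three readings, and whether
    at least two of them agree to within `tolerance`."""
    a, b, c = sorted(readings)
    # The median is robust: one wildly wrong sensor cannot drag it far.
    trusted = (b - a) <= tolerance or (c - b) <= tolerance
    return b, trusted

# One faulty sensor (say, a frozen pitot tube reading 0) is outvoted:
print(vote([251.0, 0.0, 249.5]))   # (249.5, True)

# But a common-cause failure defeats redundancy: if all the tubes ice over
# and read near zero, the sensors "agree" and a wrong value is trusted.
print(vote([0.0, 1.0, 0.5]))       # (0.5, True)
```

The second call shows both sides of the argument above: voting masks a single random failure, but a common-cause event that skews every sensor the same way is confidently voted through as "trusted".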
 
  • #10
KF81 said:
Yes, automating weapons systems is a stupid idea, even though that possibility is being considered by some.
I would not be at all surprised if there were many more people working on this than on anything else, including autonomous cars.
And the more jobs that get automated, the fewer people are working, which increases unemployment.

But apart from those two examples i don't know what the big deal is.
Those are pretty important examples to ignore. Are you really asking about the possibility of machine hostility toward humans?
I was wondering if anybody here could explain some ways that creating super intelligent computer systems could be dangerous to humanity, and in what ways they could get out of control and have a major negative effect on us.
One often-mentioned use of quantum computers is their expected ability to break much of today's code and password protection. That ability could cut both ways.
 
  • #11
FactChecker said:
Why would you expect a "bigger" failure? Redundant sensors and systems can greatly reduce the error rate without causing a bigger failure. Automated systems are the only path of continuous improvement in tasks where human errors would otherwise limit the improvement.

Yes, it reduces the frequency of occurrence, but sometimes a more extreme event can override both the original sensors and the backups, creating a sensor overload that confuses the human operator, who then makes a faulty decision with more catastrophic results.

There was a recent article arguing that our penchant for safety might make things more dangerous. It cited safety helmets, which led people to believe they could attempt more life-threatening feats because they were protected, and thus to get killed instead of just seriously injured.
 
  • #12
jedishrfu said:
There was a recent article arguing that our penchant for safety might make things more dangerous. It cited safety helmets, which led people to believe they could attempt more life-threatening feats because they were protected, and thus to get killed instead of just seriously injured.
That's a good point. I guess one should say that the improved systems offer the opportunity for increased safety. Perhaps humans cannot be saved from themselves. But it still means that a person who wants to be safe can take advantage of the improved systems. This reminds me of the comparison between a cruiser motorcycle and a racing motorcycle. Although there are a lot of crazy people who die on racing motorcycles, they are much safer than a cruiser when ridden like a cruiser: they handle turns and braking much better.
 
  • #13
I think everyone is ignoring my statement in post #2:
True AI to the full extent would create an entirely different species of sentient beings. Who knows what they might think of us? They might decide we are a nuisance that needs to be eradicated.
You're all talking about faulty sensors and such, which to me have absolutely nothing to do with the ultimate issue.

Personally, I do NOT think that another species of high intelligence would be hostile to us, BUT ... I think everyone is forgetting that humans have had many hundreds of thousands of years of group/clan/societal association, and AI would likely have none of that, so who knows what would happen.

It just seems to me that everyone is trivializing the problem based on modern robots and sensors, not on the actual issue of another sentient species. Of course, I could be wrong that even incredibly intelligent computers would become sentient, but I don't see how.
 
  • #14
phinds said:
Personally, I do NOT think that another species of high intelligence would be hostile to us, BUT ... I think everyone is forgetting that humans have had many hundreds of thousands of years of group/clan/societal association, and AI would likely have none of that, so who knows what would happen.

It just seems to me that everyone is trivializing the problem based on modern robots and sensors, not on the actual issue of another sentient species.
I think that you have a good point and are more in tune with what the OP had in mind. But I cannot say much that would be measurably different from science fiction. I do believe that in the near term, there is a lot of AI being worked on in military weapon systems that can kill people. I also think that neural network systems that are free to develop their own patterns, rules, and categorical systems might turn those weapon systems hostile. An AI system or network that identifies friend or foe could conceivably go rogue.
 
  • #15
When I think about the student misidentified by Apple's face recognition to the point where they kicked him out of a store, and then about someone using a similar system to identify fugitives wanted dead or alive, it gives me pause, and some shivers too.
 
  • #16
jedishrfu said:
When I think about the student misidentified by Apple's face recognition to the point where they kicked him out of a store, and then about someone using a similar system to identify fugitives wanted dead or alive, it gives me pause, and some shivers too.
I think you are making comments based on current technology and sensors that have nothing to do with a new sentient species of super intelligent AI, which is what the OP was asking about.

By the time we get to such a situation (if we ever do, and I think we will), sensors will be better, and I have no doubt at all that AI face recognition will be much better than human face recognition.

I'm not suggesting that super intelligent AI will never make mistakes, just that it is likely to be better at pretty much everything than we humans are.
 
  • #17
Yes, but history repeats itself, and your argument has been used forever to rationalize that things will be better. They are different, they are better, and the gotchas get more hidden, but they are still there, they will always be there, and someone will suffer when they fail.
 
  • #18
jedishrfu said:
Yes, but history repeats itself, and your argument has been used forever to rationalize that things will be better. They are different, they are better, and the gotchas get more hidden, but they are still there, they will always be there, and someone will suffer when they fail.
Yes, and again, I'm not saying that super intelligent AI won't make mistakes, just that it will make fewer than humans do.

My understanding is that even today's smart cars are very close to being safer than human drivers, and are already there in most normal driving situations.
 
  • #19
Yes, they're safer for passengers. We don't know about pedestrians, though.
 
  • #20
jedishrfu said:
Yes, they're safer for passengers. We don't know about pedestrians, though.
My understanding is that we DO know and the results are good, just not good enough yet to totally trust them on the road, because the insurance liability issue is such a big stumbling block.
 
  • #21
@phinds I'm not disagreeing with you. I realize that you have hit all the right points. I just want to make clear that technology will always have adverse effects that we can't anticipate, whether a super general intelligence is behind it or not.

In some sense, our hope is that Asimov's laws of robotics will prevail, even though we know inherently that it will be devilishly hard to craft a system that can actually follow them as infallibly as Asimov envisioned. Even so, many of his stories revolved around circumventions of the laws that could not be anticipated until they happened. I guess that's why I loved his stories so much.
 
  • #22
jedishrfu said:
@phinds I'm not disagreeing with you. I realize that you have hit all the right points. I just want to make clear that technology will always have adverse effects that we can't anticipate, whether a super general intelligence is behind it or not.
Agreed.

In some sense, our hope is that Asimov's laws of robotics will prevail, even though we know inherently that it will be devilishly hard to craft a system that can actually follow them as infallibly as Asimov envisioned. Even so, many of his stories revolved around circumventions of the laws that could not be anticipated until they happened. I guess that's why I loved his stories so much.
If we do in fact create what amounts to another sentient race, as I envision, I see no reason why the Three Laws, or anything like them, should be something they care about at all. That would be rather like expecting Homo sapiens to follow the grooming traditions of monkeys.
 
  • #24
Well, this hot thread seems to have cooled off. Maybe we can warm it up a bit.

First, to @KF81: get a copy (public library?) of "Life 3.0" by Max Tegmark for examples of AGI going rogue.

Human endeavor is fraught with unintended consequences. Famous last words: "It seemed like a good idea at the time." I think we are getting weak but ominous messages from our current implementations of AI. Some seem to develop capabilities that were not anticipated. Some do things that we cannot fathom. Even with non-AI systems or activities, we often find well-intentioned endeavors go awry with unforeseen complications.

We believe that we can control AI because we "designed" it. I think this is an illusion born of our arrogance. The problem with AI is that it could well be the last human accomplishment. Let's look at an analogous example. Suppose we could bioengineer a bacterium that produces a cure for cancer. This bacterium also turns out to be a "superbug" with extraordinary characteristics: a long incubation period, extreme infectivity, a long life away from a host, and near-100% mortality. First, one would want to guarantee its containment. Considering man's fallibility, one might first think about whether or not to proceed with this project at all, even though it would save millions of lives, given the risk of billions of deaths. The difference between this and AI is that we know the problems with pathogens. We can only guess at the problems that might occur with AI.

AI systems designed for well-defined tasks should not be a problem, except perhaps for validating the results from the system. But AGI is another matter. Our first task in developing AGI is to assure that it is contained and has no access to a network, let alone the internet. Any information the system needs, and only the information it needs, must be provided on media such as SSDs, with no hookup to another computer system. The SSDs must then be scrubbed before reuse. All interaction with humans should go through audio/visual routes or other means that do not need another computer to process. The AGI system must not have control of any external system, nor access to information that we do not wish it to have. We certainly do not wish it to be like us.

The problem I see is that there is such a fascination with AGI that proper precautions will not be taken in the race to be the first to demonstrate its existence. Another problem is that it might be developed in secret by someone conspiring with the system for purposes of power or wealth (Life 3.0).

Like any machine, an AI does not have to be sentient in the human sense to be dangerous.
 
  • #25
A good analogy for the out-of-control issue is the proliferation of malware today. Even if you shut down a given network of machines and refresh their hard drives with clean software, they will soon get infected by some form of malware.

One company I worked at was attacked by NIMDA and did all the proper cleaning. However, on a large network, some machine somewhere wasn't cleaned, and it would reinfect newly added machines even as their OS was being installed over the network. It was quite distressing to have your virus scanner report that NIMDA was once again on the machine you had just finished cleaning up.

Imagine an intelligent AI that can persist on the internet as a whole. How would we ever be able to remove it effectively without every machine being shut down, its drive wiped, its OS refreshed, its software restored, and then rebooted?
 
  • #26
It is impossible to discuss because almost all our concepts of better/worse and good/bad are biased toward human self-interest. It might be objectively "better" for the world not to have humans around. The scary thing is having any non-human entity (AI, alien, or whatever) with its own viewpoint and its own freedom of action.

It reminds me of the frequent beginner's problem in physics where the answer depends on where you draw the boundary lines. In this case, the question is whether the system should include only humans or not.

I find it easy to imagine AI as the next possible step in evolution. The landmark step that breaks the bonds with biology.

To be clear, I do not favor an AI takeover. I merely recognize that I am not a disinterested neutral observer.
 
  • #27
@jedishrfu's warning posts describe problem scenarios with artificial intelligence that do not require machine consciousness, self-awareness, or even independent decision-making abilities.

Take the integrated radar / communication traffic control systems prior to GPS. As systems improved, more aircraft with more information devices were added to the traffic control picture; sometimes to the point of operator information overload. A simple blip on a moving target indicator began to include more and more detailed information including identification, call signs and radio frequencies, height above ground and other data from internal aircraft sensors, even company coded messages and alerts.

Each technological step forward to improve traffic control added more information handling, however useful and intelligent, on top of the initial safety mission.
 
  • #28
Great example of AI gone wrong. Read/view "The Corbin Project".
As for weaponized robots, how do you decide who has won a conflict if both sides have similar robots (the tech will match, after all) blazing away at each other while the humans watch? Is it last robot standing, or whoever has more manufacturing capacity and resources? What happens if the robots are smart enough to realize they are only destroying each other and refuse to carry on? A robot strike?
Klystron, yes, we have made great leaps in aircraft tech, but have you ever watched Aircraft Crash Investigation? You may be interested to see how many times the tech itself has caused aircraft crashes. Pilots no longer seem to be able to "fly" their planes and are lost when the tech goes awry. Boeing still has planes grounded due to faulty software.
The latest tech may be great while it works, but deadly when it fails, because few can fix it easily. Own a modern car?
How many can service one themselves?
 
  • #30
The problem I have with the Forbin story is that the computer has a malevolence to it, which I don't feel AI systems will have when they become that powerful. Instead, I think they will just be confidently wrong when a situation is outside their training set but they are required to act.

Self-driving cars exhibit this issue today: not always seeing slow-moving vehicles or pedestrians that do unexpected things, and accidentally hitting them. This could be a sensor issue, a processing issue, or some odd geometry or time-of-day situation where a sensor is temporarily blinded; who knows what.
 
  • #31
Ah yes, my forgetfulness. It is indeed the "Forbin Project". It's been many years and much sci-fi since then. I'm not sure if the computer/AI in that was actually malevolent, or if it decided humans couldn't be trusted to control themselves with atomics.
I did spend some time on thought experiments about how to overcome the machine, and if it was smart enough to insist on having suitable sensors installed everywhere it might be vulnerable (power supply, missile silos, etc.), it could well-nigh be unstoppable. Its only vulnerability was that if it did decide to launch all the missiles, it would destroy not only the Earth but itself as well (no more power or humans to do its bidding).
A combined effort to disable all the missiles at the same time might work, even if some launched (what would the Russian computer do if they were working as one?). Could it design and have built self-replicating robots and do away with human servants altogether? A great future to look forward to!
 
  • #32
KF81 said:
I was wondering if there is anybody here who could explain to me some ways that creating super intelligent computer systems could be dangerous to humanity and what ways are there for it to get out of control and have a major negative effect on us..
The most realistic danger I can imagine is that by the time it really learns to think, humanity (most of it) will have completely forgotten how to think.
 
  • #33
Rive said:
The most realistic danger I can imagine is that by the time it really learns to think, humanity (most of it) will have completely forgotten how to think.

Too late. Already happened.
 
  • #34
AI implementation is proceeding at a fast rate for specific tasks to which it is aptly suited, with little untoward risk to humankind. It is especially useful for analyzing large data sets or fast-evolving data, and it can give a heads-up on potentially serious situations as they evolve.

However, governments that implement AI for military purposes will be hard-pressed to contain the capabilities of these systems. Adversaries will continually try to develop AI to counter their opponents, escalating the capabilities of these systems. Of particular importance is the use of AI to produce strategies for conflict, since the system must "understand" human behavior as well as its country's strengths and weaknesses, which I believe you do not want AI systems to know. It will learn the mind games that humans play.

AI will become the atomic bomb of cyberwarfare. Have any of you thought about what would happen if our power grid were taken down for an extended period of time? Even a week? Government and military applications may very well be the gateway to the domination of humankind by AI. China is currently trying to use AI to monitor its population and award social credits to those who behave. These applications necessarily need a network, which will be a playground for AI.
 
  • #35
We can imagine all sorts of apocalyptic scenarios, but what is the point of that?

This thread is inherently speculative, but I fear that we might be going off the deep end in a technical forum. This forum is not for general discussion.
 

Related to Intelligent Computer Systems....

1. What are intelligent computer systems?

Intelligent computer systems are computer programs or machines that are designed to perform tasks that typically require human intelligence, such as learning, problem-solving, and decision-making.

2. How do intelligent computer systems work?

Intelligent computer systems use algorithms and machine learning techniques to process and analyze data, make decisions, and improve their performance over time. They also often incorporate natural language processing and computer vision capabilities.

3. What are the potential applications of intelligent computer systems?

Intelligent computer systems have a wide range of potential applications, including in fields such as healthcare, finance, transportation, and manufacturing. They can be used for tasks such as data analysis, predictive modeling, and automation of routine processes.

4. What are the benefits of using intelligent computer systems?

The use of intelligent computer systems can lead to increased efficiency, accuracy, and speed in completing tasks. They can also handle large amounts of data and make complex decisions, freeing up human workers to focus on more creative and strategic tasks.

5. Are there any concerns about the use of intelligent computer systems?

Some concerns about the use of intelligent computer systems include potential biases in the data they are trained on, the possibility of job displacement for human workers, and ethical considerations surrounding their decision-making processes. It is important for developers to address these concerns and ensure responsible and ethical use of these systems.
