Intelligent Computer Systems

  • Thread starter: KF81
  • Tags: Computer Systems

Summary
The discussion centers on the potential dangers of creating super intelligent computer systems, particularly Artificial General Intelligence (AGI). Concerns include the possibility of AGI developing hostile attitudes towards humanity, as it may view humans as a nuisance or threat. Examples of failures in automated systems, such as the Boeing 737 Max and Three Mile Island incidents, highlight how sensor errors can lead to catastrophic outcomes. Participants also debate the implications of AI in military applications and the unpredictability of a new sentient species. Ultimately, the conversation emphasizes the need for caution and ethical considerations as technology advances.
  • #31
Ah yes, my forgetfulness. It is indeed the "Forbin Project". It's been many years and much sci-fi since then. Not sure if the computer/AI in that was actually malevolent or simply decided humans weren't to be trusted to control themselves with atomics.
I did spend some time in thought experiments on how to overcome the machine, and if it was smart enough to insist on having suitable sensors installed everywhere it might be vulnerable (power supply, missile silos, etc.), it could be well nigh unstoppable. Its only vulnerability was that if it did decide to launch all the missiles, it would destroy not only the Earth but itself as well (no more power or humans to do its bidding).
A combined effort to disable all the missiles at the same time might work, even if some launched (what would the Russian computer do if they were working as one?). Could it design and have built self-replicating robots and do away with human servants altogether? A great future to look forward to!
 
  • #32
KF81 said:
I was wondering if there is anybody here who could explain to me some ways that creating super intelligent computer systems could be dangerous to humanity, and what ways there are for it to get out of control and have a major negative effect on us.
The most realistic danger I can imagine is that by the time it really learns to think, humanity (most of it) will completely forget thinking.
 
  • #33
Rive said:
The most realistic danger I can imagine is that by the time it really learns to think, humanity (most of it) will completely forget thinking.

Too late. Already happened.
 
Likes: Rive, Klystron, gleem and 1 other person
  • #34
AI implementation is proceeding at a fast rate for specific tasks to which it is aptly suited, with little untoward risk to humankind. It is especially useful for analyzing large data sets or fast-evolving data, and it can give a heads-up on potentially serious situations as they develop.

However, governments that implement AI for military purposes will be hard pressed to contain the capabilities of these systems. Adversaries will continually try to develop AI to counter their opponents, escalating the capabilities of these systems. Of particular importance is its use to produce strategies for conflict, since the AI system must "understand" human behavior as well as its own country's strengths and weaknesses, which I believe you do not want AI systems to know. It will learn the mind games that humans play.

AI will become the atomic bomb of cyberwarfare. Have any of you thought about what would happen if our power grid were taken down for an extended period of time? Even a week? Government/military applications may very well be the gateway to the domination of humankind by AI. China is currently trying to use AI to monitor its population and award social credits to those who behave. These applications necessarily require a network, which will be a playground for AI.
 
  • #35
We can imagine all sorts of apocalyptic scenarios, but what is the point of that?

This thread is inherently speculative, but I fear that we might be going off the deep end in a technical forum. This forum is not for general discussion.
 
  • #36
Good time to close the thread. Thank you all for participating.

Colossus/Guardian signing off.
 
Likes: Klystron
