Intelligent Computer Systems

  • Thread starter: KF81
  • Tags: Computer Systems

Discussion Overview

The discussion revolves around the potential dangers and ethical considerations of creating Artificial General Intelligence (AGI) and Super Intelligent Computer systems. Participants explore various scenarios where these systems could pose risks to humanity, including automation of weapons, job displacement, and unforeseen failures in technology.

Discussion Character

  • Debate/contested
  • Exploratory
  • Technical explanation

Main Points Raised

  • Some participants express concern about the ethical implications of automating weapons systems, suggesting that it could lead to significant dangers for humanity.
  • There are arguments that true AI could evolve into a new species that might view humans as a nuisance, potentially leading to hostile actions against humanity.
  • Participants discuss the unpredictability of machines, particularly in critical situations where sensor failures could lead to catastrophic outcomes, citing examples like the Boeing 737 Max and the Three Mile Island incident.
  • Some argue that while redundant systems can reduce error rates, they may not prevent more extreme failures that could confuse human operators and lead to worse outcomes.
  • There is a viewpoint that the perception of safety from advanced systems might encourage riskier behavior, potentially leading to more dangerous situations.
  • Concerns are raised about the implications of quantum computing, particularly its ability to break encryption, which could have both positive and negative consequences.
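The point above about redundancy can be made concrete. As a minimal sketch (not from the discussion itself): if each of n redundant sensors fails independently with probability p, the input is lost only when all n fail together, so the combined failure probability is p^n. The caveat raised by participants is exactly that the independence assumption can break down, since a common cause (icing, a shared power bus) can fail all sensors at once.

```python
def combined_failure_probability(p: float, n: int) -> float:
    """Probability that all n redundant sensors fail at once,
    assuming each fails independently with probability p."""
    return p ** n

# A single sensor that fails 1% of the time:
single = combined_failure_probability(0.01, 1)
# Triple redundancy drives the rate down to one in a million,
# but only under the independence assumption:
triple = combined_failure_probability(0.01, 3)
```

This is why "redundant systems reduce error rates" and "redundancy may not prevent extreme failures" are both true: the p^n figure applies only to uncorrelated faults.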

Areas of Agreement / Disagreement

Participants express a range of views, with no clear consensus on the dangers posed by super intelligent systems. While some agree on the potential risks of automation and AI, others challenge the assumptions and highlight the complexity of the issues involved.

Contextual Notes

Participants highlight various limitations in their arguments, including the unpredictability of technology, the dependence on sensor reliability, and the ambiguity of outcomes related to advanced AI systems.

  • #31
Ah yes, my forgetfulness. It is indeed "The Forbin Project". It's been many years and much sci-fi since then. I'm not sure whether the computer/AI in that was actually malevolent or simply decided humans weren't to be trusted to control themselves with atomics.
I did spend some time on thought experiments about how to overcome the machine. If it was smart enough to insist on having suitable sensors installed everywhere it might be vulnerable (power supply, missile silos, etc.), it could be well-nigh unstoppable. Its only vulnerability was that if it did decide to launch all the missiles, it would destroy not only the Earth but itself as well (no more power, and no humans left to do its bidding).
A combined effort to disable all the missiles at the same time might work even if some launched (what would the Russian computer do, given they were working as one?). Could it design and have built self-replicating robots and do away with human servants altogether? A great future to look forward to!
 
  • #32
KF81 said:
I was wondering if there is anybody here who could explain to me some ways that creating super intelligent computer systems could be dangerous to humanity, and what ways there are for it to get out of control and have a major negative effect on us?
The most realistic danger I can imagine is that by the time it really learns to think, humanity (most of it) will completely forget thinking.
 
  • #33
Rive said:
The most realistic danger I can imagine is that by the time it really learns to think, humanity (most of it) will completely forget thinking.

Too late. Already happened.
 
  • #34
AI implementation is proceeding at a fast rate for specific tasks to which it is aptly suited, with little untoward risk to humankind. It is especially useful for analyzing large data sets or fast-evolving data, and it can give a heads-up on potentially serious situations as they develop.

However, governments that implement AI for military purposes will be hard pressed to contain the capabilities of these systems. Adversaries will continually try to develop AI to counter their opponents, escalating the capabilities of these systems. Of particular importance is its use to produce strategies for conflict, since the AI system must "understand" human behavior as well as its country's strengths and weaknesses, which I believe you do not want AI systems to know. It will learn the mind games that humans play.

AI will become the atomic bomb of cyberwarfare. Have any of you thought about what would happen if our power grid were taken down for an extended period of time? Even a week? Government/military applications may very well be the gateway to the domination of humankind by AI. China is currently trying to use AI to monitor its population and award social credits to those who behave. These applications necessarily need a network, which will be a playground for AI.
 
  • #35
We can imagine all sorts of apocalyptic scenarios, but what is the point of that?

This thread is inherently speculative, but I fear that we might be going off the deep end in a technical forum. This forum is not for general discussion.
 
  • #36
Good time to close the thread. Thank you all for participating.

Colossus/Guardian signing off.
 
