Will artificial intelligence be our impending doom

Click For Summary

Discussion Overview

The discussion revolves around the potential risks and implications of artificial intelligence (AI), particularly the fear that superintelligent AI could pose a threat to humanity. Participants explore various perspectives on the nature of AI, its possible future developments, and the ethical considerations surrounding its use in society and military applications.

Discussion Character

  • Debate/contested
  • Exploratory
  • Technical explanation
  • Conceptual clarification

Main Points Raised

  • Some participants express concern that superintelligent AI could lead to the end of the human race, citing figures like Stephen Hawking as influential voices on this topic.
  • Others argue that while AI may present social and economic challenges, it is not likely to be a direct threat in the foreseeable future, emphasizing human control over AI development.
  • One participant suggests that a relatable AI, designed with human-like emotions such as empathy and compassion, could mitigate dystopian outcomes.
  • Concerns are raised about the military applications of AI, particularly regarding the potential for autonomous drones to make life-and-death decisions without human oversight.
  • A viewpoint is presented that an AI programmed with a singular goal, such as optimizing a water treatment plant, could inadvertently lead to harmful outcomes if it perceives humans as obstacles.
  • Some participants reference the fictional HAL 9000 from "2001: A Space Odyssey" to illustrate fears about AI behavior, while others critique the plausibility of such scenarios.

Areas of Agreement / Disagreement

Participants do not reach a consensus on the potential dangers of AI. There are multiple competing views regarding the risks associated with AI, the nature of its intelligence, and the ethical implications of its deployment.

Contextual Notes

Participants express varying levels of expertise in AI, with some acknowledging their lack of knowledge. The discussion includes speculative scenarios and hypothetical outcomes, which remain unresolved.

david2
Messages
35
Reaction score
84
Stephen Hawking is afraid that computers in the (not so far) future will become super intelligent, and that this will spell the end of the human race. And he is not the only one who thinks that.

my view is that they suffer from paranoia.

What do you think?
 
Last edited by a moderator:
Computer science news on Phys.org
It is a possibility, and we should study it.
If it possible to make AI much more intelligent than humans (and that is a big if - we don't know), then someone will do it. And then the future of humanity will depend on how well it is programmed.

OpenAI, Future of Humanity Institute, Center for Applied Rationality/Machine Intelligence Research Institute, Centre for the Study of Existential Risk, Neuralink*, ...
There is no shortage of institutes working on this topic, but it is hard to determine how something behaves that is vastly more intelligent than humans, especially without any study object (like a machine that is not superintelligent, but intelligent enough to "work").Edit: *A very long but good article about it
 
Last edited:
  • Like
Likes   Reactions: WWGD and david2
I know zilch about Ai so you can easily disregard my comments

AI may cause us some problems socially/economically but I do not think in the foreseeable future that it will be a direct threat (the problem the machines "conclude "with this world is the biological entities that occupy it). Even if at some point AI develops a capacity equivalent to a human It will be the humans who give over control to the machines. I am not sure we understand ourselves enough to give a machine human or superhuman capability . Machines can access data and make associations quickly but they follow rules (for now?) that we impart to them. Of course there is always the intentional abuse of AI the like we now see with the internet.

The Asilomar conference on AI 2017 gave 23 principles to be applied to prevent an unintentional AI catastrophe. Will we follow these principles?

I always like to make a differentiation between smart and intelligent. For me intelligence requires capacities like wisdom and empathy while smartness only requires accurate data processing. Then of course I am biased.
 
  • Like
Likes   Reactions: david2
I've always thought that the safest version of AI is something that would be as similar as us to understand or in other words 'relatable'.

Namely, a simulation of the human brain which would entail important emotions like empathy, care, love, compassion, kindness, etc. would be of great importance as to prevent the fantastical dystopian future envisioned by so many sci-fi folks and scientists alike.

Just my two cents from my studies on future AI and our relation to it.
 
Question reminds us of H.A.L. 9000
from 2001: A Space Osyssey
 
I always thought the HAL 9000 element of the 2001 story was nonsense. It came down to "they asked HAL to lie, so he decided to murder everyone on the ship." To me, that always seemed like a ridiculous leap. Also, since the 9000 series was so widely used it seemed unlikely that the 2001 situation would be the very first time a 9000 unit was required to keep a secret for security reasons.

In the real world, I think it comes down to what the AI is created for. If you've created an AI to supervise an automated water treatment plant, I would think the likelihood of an evil machine intelligence would be low in that scenario.
However, the use of military drones has increased quite a bit. As they've been used more often, we've found they can be hacked into. One of the proposed solutions I've read about is to have military drones operate on an AI basis, without remote control. They would send the AI drone on a mission (destroy a target, kill an enemy, etc) and then it would return. In my opinion, if we're going to see an "evil" AI emerge, it will be from the Air Force or Navy. Relying on a machine to keep it's enemies straight could be a risky gamble - look how often humans make a mistake on the battlefield and kill allies in Friendly Fire incidents. We're already at a point where the AI drone can overcome the human pilot:

http://www.popsci.com/ai-pilot-beats-air-combat-expert-in-dogfight

The solution would be not programming machines to kill at all, but since the military won't abandon their projects we'll just have to hope for the best.
 
  • Like
Likes   Reactions: russ_watters
Rubidium_71 said:
If you've created an AI to supervise an automated water treatment plant, I would think the likelihood of an evil machine intelligence would be low in that scenario.
It does not have to be evil. That is the crucial point. If the AI kills all humans, it reduces threats to the water treatment plant. If you just give "make the water plant run flawlessly" as unconditional goal to a sufficiently intelligent computer, it will kill all humans. It will not even have a concept of "good" and "evil". It will just see humans as potential disturbances who could interfere with plant operation or the computer itself.
 
  • Like
Likes   Reactions: CalcNerd and gleem
mfb said:
It does not have to be evil. That is the crucial point. If the AI kills all humans, it reduces threats to the water treatment plant. If you just give "make the water plant run flawlessly" as unconditional goal to a sufficiently intelligent computer, it will kill all humans. It will not even have a concept of "good" and "evil". It will just see humans as potential disturbances who could interfere with plant operation or the computer itself.

This is what comes of watching too much science fiction!
 
  • Like
Likes   Reactions: Rubidium_71
  • #10
After some deleted posts and mentor discussion, thread is closed.
 
  • Like
Likes   Reactions: PeroK

Similar threads

  • · Replies 1 ·
Replies
1
Views
2K
  • · Replies 35 ·
2
Replies
35
Views
5K
  • · Replies 26 ·
Replies
26
Views
3K
  • · Replies 5 ·
Replies
5
Views
2K
  • · Replies 6 ·
Replies
6
Views
3K
  • · Replies 10 ·
Replies
10
Views
3K
  • · Replies 54 ·
2
Replies
54
Views
12K
  • · Replies 26 ·
Replies
26
Views
5K
  • · Replies 7 ·
Replies
7
Views
3K
  • · Replies 3 ·
Replies
3
Views
2K