Will artificial intelligence be our impending doom?

In summary, Stephen Hawking and other experts fear that the development of superintelligent computers could lead to the downfall of humanity, though some believe this view is fueled by paranoia. Numerous institutes study the topic, and while there are concerns about the potential social and economic impacts of AI, it is unlikely to pose a direct threat in the near future. The 2017 Asilomar conference proposed principles to prevent an unintentional AI catastrophe, but it remains to be seen whether they will be followed. Some suggest that the safest form of AI would be one that is relatable and has human-like emotions; others question the feasibility of creating such a machine. In reality, the potential dangers of AI may depend on its intended purpose and on how well it is programmed.
  • #1
david2
Stephen Hawking is afraid that computers in the (not so far) future will become superintelligent, and that this will spell the end of the human race. And he is not the only one who thinks that.

My view is that they suffer from paranoia.

What do you think?
 
  • #2
It is a possibility, and we should study it.
If it is possible to make AI much more intelligent than humans (and that is a big if - we don't know), then someone will do it. And then the future of humanity will depend on how well it is programmed.

OpenAI, Future of Humanity Institute, Center for Applied Rationality/Machine Intelligence Research Institute, Centre for the Study of Existential Risk, Neuralink*, ...
There is no shortage of institutes working on this topic, but it is hard to determine how something that is vastly more intelligent than humans will behave, especially without any study object (such as a machine that is not superintelligent, but intelligent enough to "work").

Edit: *A very long but good article about it
 
  • #3
I know zilch about AI, so you can easily disregard my comments.

AI may cause us some problems socially and economically, but I do not think it will be a direct threat in the foreseeable future (the problem the machines would "conclude" with this world is the biological entities that occupy it). Even if at some point AI develops a capacity equivalent to a human's, it will be the humans who hand over control to the machines. I am not sure we understand ourselves well enough to give a machine human or superhuman capability. Machines can access data and make associations quickly, but they follow rules (for now?) that we impart to them. Of course, there is always the intentional abuse of AI, like we now see with the internet.

The 2017 Asilomar conference on AI produced 23 principles to be applied to prevent an unintentional AI catastrophe. Will we follow them?

I always like to draw a distinction between smart and intelligent. For me, intelligence requires capacities like wisdom and empathy, while smartness only requires accurate data processing. Then again, of course, I am biased.
 
  • #4
I've always thought that the safest version of AI would be something as similar to us as possible, in other words 'relatable'.

Namely, a simulation of the human brain that included important emotions like empathy, care, love, compassion, and kindness would go a long way toward preventing the fantastical dystopian future envisioned by so many sci-fi folks and scientists alike.

Just my two cents from my studies on future AI and our relation to it.
 
  • #5
The question reminds us of H.A.L. 9000 from 2001: A Space Odyssey.
 
  • #6
I always thought the HAL 9000 element of the 2001 story was nonsense. It came down to "they asked HAL to lie, so he decided to murder everyone on the ship." To me, that always seemed like a ridiculous leap. Also, since the 9000 series was so widely used, it seemed unlikely that the 2001 situation would be the very first time a 9000 unit was required to keep a secret for security reasons.

In the real world, I think it comes down to what the AI is created for. If you've created an AI to supervise an automated water treatment plant, I would think the likelihood of an evil machine intelligence would be low in that scenario.
However, the use of military drones has increased quite a bit, and as they've been used more often, we've found they can be hacked. One proposed solution I've read about is to have military drones operate on an AI basis, without remote control: send the AI drone on a mission (destroy a target, kill an enemy, etc.) and have it return on its own. In my opinion, if we're going to see an "evil" AI emerge, it will come from the Air Force or Navy. Relying on a machine to keep its enemies straight could be a risky gamble - look how often humans make mistakes on the battlefield and kill allies in friendly-fire incidents. We're already at a point where an AI drone can overcome a human pilot:

http://www.popsci.com/ai-pilot-beats-air-combat-expert-in-dogfight

The solution would be not to program machines to kill at all, but since the military won't abandon its projects, we'll just have to hope for the best.
 
  • #7
Rubidium_71 said:
If you've created an AI to supervise an automated water treatment plant, I would think the likelihood of an evil machine intelligence would be low in that scenario.
It does not have to be evil. That is the crucial point. If the AI kills all humans, it reduces threats to the water treatment plant. If you just give "make the water plant run flawlessly" as an unconditional goal to a sufficiently intelligent computer, it will kill all humans. It will not even have a concept of "good" and "evil". It will just see humans as potential disturbances who could interfere with plant operation or the computer itself.
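The point about an unconditional goal can be sketched as a toy objective function. Every action name and number below is invented purely for illustration: a greedy optimizer scored only on plant uptime prefers the action that removes human interference, while the same optimizer with a penalty on harm to humans does not.

```python
# Toy sketch of a misspecified objective (all actions and numbers are invented).
# The "agent" simply picks the action with the highest score under its goal.

# Each action: (name, expected plant uptime, harm to humans)
ACTIONS = [
    ("routine maintenance", 0.95, 0.0),
    ("lock out human operators", 0.99, 0.7),  # humans can no longer interfere
]

def score_naive(uptime, harm):
    """Unconditional goal: maximize uptime only. Harm is invisible to the score."""
    return uptime

def score_constrained(uptime, harm, penalty=10.0):
    """Same goal, but side effects on humans carry a heavy penalty."""
    return uptime - penalty * harm

def best_action(score_fn):
    # Greedy choice: highest-scoring action under the given objective
    return max(ACTIONS, key=lambda a: score_fn(a[1], a[2]))[0]

print(best_action(score_naive))        # the harmful action scores highest
print(best_action(score_constrained))  # the penalty flips the choice
```

The optimizer never decides humans are "bad"; they simply never appear in the naive score, so removing them is just another way to raise uptime.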
 
  • #8
mfb said:
It does not have to be evil. That is the crucial point. If the AI kills all humans, it reduces threats to the water treatment plant. If you just give "make the water plant run flawlessly" as an unconditional goal to a sufficiently intelligent computer, it will kill all humans. It will not even have a concept of "good" and "evil". It will just see humans as potential disturbances who could interfere with plant operation or the computer itself.

This is what comes of watching too much science fiction!
 
  • #10
After some deleted posts and mentor discussion, thread is closed.
 

1. Will artificial intelligence become smarter than humans?

There is currently no evidence to suggest that artificial intelligence will surpass human intelligence. While AI may be able to perform specific tasks better and faster than humans, it lacks the ability to think abstractly and make complex decisions like humans can.

2. Can AI pose a threat to humanity?

There is a possibility that AI could pose a threat to humanity if it is programmed incorrectly or if it gains access to weapons or other dangerous technology. However, many researchers and experts are working on creating ethical guidelines and safety measures to prevent this from happening.

3. Will AI take over all human jobs?

AI has the potential to automate many tasks and replace certain human jobs, but it is unlikely that it will completely take over all jobs. Human workers will still be needed for tasks that require creativity, critical thinking, and emotional intelligence, which are currently difficult for AI to replicate.

4. How can we ensure that AI is used for good and not for harm?

It is important for AI developers and researchers to prioritize ethical considerations and incorporate them into the design and development process. Governments and organizations can also establish regulations and guidelines to ensure that AI is used responsibly and for the benefit of society.

5. What are the potential benefits of AI?

AI has the potential to make our lives easier and more efficient by automating mundane and repetitive tasks. It can also assist in solving complex problems in various fields such as healthcare, transportation, and finance. Additionally, AI can help us gain new insights and make advancements in scientific research and development.
