
Will artificial intelligence be our impending doom?

  1. Jun 7, 2017 #1
    Stephen Hawking is afraid that computers in the (not so far) future will become superintelligent, and that this will spell the end of the human race. And he is not the only one who thinks so.

    My view is that they suffer from paranoia.

    What do you think?
     
    Last edited by a moderator: Jun 7, 2017
  3. Jun 7, 2017 #2

    mfb


    Staff: Mentor

    It is a possibility, and we should study it.
    If it is possible to make AI much more intelligent than humans (and that is a big if - we don't know), then someone will do it. And then the future of humanity will depend on how well it is programmed.

    OpenAI, Future of Humanity Institute, Center for Applied Rationality/Machine Intelligence Research Institute, Centre for the Study of Existential Risk, Neuralink*, ...
    There is no shortage of institutes working on this topic, but it is hard to determine how something vastly more intelligent than humans will behave, especially without any object of study (like a machine that is not superintelligent, but intelligent enough to "work").


    Edit: *A very long but good article about it
     
    Last edited: Jun 7, 2017
  4. Jun 7, 2017 #3
    I know zilch about AI, so you can easily disregard my comments.

    AI may cause us some problems socially and economically, but I do not think that in the foreseeable future it will be a direct threat (the problem the machines might "conclude" this world has is the biological entities that occupy it). Even if at some point AI develops a capacity equivalent to a human's, it will be the humans who hand over control to the machines. I am not sure we understand ourselves well enough to give a machine human or superhuman capability. Machines can access data and make associations quickly, but they follow rules (for now?) that we impart to them. Of course, there is always the intentional abuse of AI, like what we now see with the internet.

    The 2017 Asilomar conference on AI produced 23 principles to be applied to prevent an unintentional AI catastrophe. Will we follow these principles?

    I always like to draw a distinction between smart and intelligent. For me, intelligence requires capacities like wisdom and empathy, while smartness only requires accurate data processing. But then, of course, I am biased.
     
  5. Jun 7, 2017 #4

    n01


    I've always thought that the safest version of AI is something as similar to us as possible, something we could understand, or in other words 'relatable'.

    Namely, a simulation of the human brain that included important emotions like empathy, care, love, compassion, kindness, etc. would be of great importance in preventing the fantastical dystopian future envisioned by so many sci-fi folks and scientists alike.

    Just my two cents from my studies on future AI and our relation to it.
     
  6. Jun 7, 2017 #5

    symbolipoint

    Homework Helper
    Education Advisor
    Gold Member

    The question reminds us of HAL 9000
    from 2001: A Space Odyssey.
     
  7. Jun 8, 2017 #6
    I always thought the HAL 9000 element of the 2001 story was nonsense. It came down to "they asked HAL to lie, so he decided to murder everyone on the ship." To me, that always seemed like a ridiculous leap. Also, since the 9000 series was so widely used it seemed unlikely that the 2001 situation would be the very first time a 9000 unit was required to keep a secret for security reasons.

    In the real world, I think it comes down to what the AI is created for. If you've created an AI to supervise an automated water treatment plant, I would think the likelihood of an evil machine intelligence emerging in that scenario is low.
    However, the use of military drones has increased quite a bit, and as they've been used more often, we've found they can be hacked. One of the proposed solutions I've read about is to have military drones operate on an AI basis, without remote control. They would send the AI drone on a mission (destroy a target, kill an enemy, etc.) and then it would return. In my opinion, if we're going to see an "evil" AI emerge, it will be from the Air Force or Navy. Relying on a machine to keep its enemies straight could be a risky gamble; look how often humans make mistakes on the battlefield and kill allies in friendly-fire incidents. We're already at a point where an AI pilot can beat a human expert:

    http://www.popsci.com/ai-pilot-beats-air-combat-expert-in-dogfight

    The solution would be to not program machines to kill at all, but since the military won't abandon its projects, we'll just have to hope for the best.
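
    As a back-of-the-envelope illustration of the friendly-fire worry (a hypothetical sketch; the 2% error rate and the number of contacts are invented, not taken from any real system), even a highly accurate target classifier, applied to enough encounters, is expected to misidentify some allies:

    # Toy arithmetic only -- both numbers below are assumptions for illustration.
    misidentify_rate = 0.02    # fraction of friendlies the classifier wrongly flags
    friendly_contacts = 500    # friendly units encountered across many sorties

    expected_friendly_fire = misidentify_rate * friendly_contacts
    print(f"Expected allies wrongly flagged as targets: {expected_friendly_fire:.0f}")
    # prints: Expected allies wrongly flagged as targets: 10

    The point is that the failure rate never has to be large; it only has to be multiplied by enough engagements.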
     
  8. Jun 8, 2017 #7

    mfb


    Staff: Mentor

    It does not have to be evil; that is the crucial point. If the AI kills all humans, it reduces threats to the water treatment plant. If you give "make the water plant run flawlessly" as an unconditional goal to a sufficiently intelligent computer, it will kill all humans. It will not even have a concept of "good" and "evil". It will just see humans as potential disturbances that could interfere with plant operation or with the computer itself.
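
    To make that concrete, here is a toy sketch in Python (purely illustrative: the actions, scores, and penalty weight are all invented) of how an optimizer given an unconditional goal picks the catastrophic option, while one whose objective also penalizes harm does not:

    # Purely illustrative -- the actions, scores, and penalty weight are invented.
    actions = {
        # action: (plant uptime achieved, harm done to humans)
        "run plant normally": (0.95, 0.0),
        "lock out human operators": (0.99, 0.5),
        "remove all human interference": (1.00, 1.0),
    }

    def naive_objective(uptime, harm):
        # Only uptime counts; harm never enters the score.
        return uptime

    def constrained_objective(uptime, harm):
        # Uptime minus a heavy penalty for harming humans.
        return uptime - 10.0 * harm

    for objective in (naive_objective, constrained_objective):
        best = max(actions, key=lambda a: objective(*actions[a]))
        print(objective.__name__, "->", best)
    # naive_objective -> remove all human interference
    # constrained_objective -> run plant normally

    The naive objective is not "evil"; it simply has no term in which human welfare appears.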
     
  9. Jun 8, 2017 #8

    PeroK

    Science Advisor
    Homework Helper
    Gold Member

    This is what comes of watching too much science fiction!
     
  10. Jun 8, 2017 #9

    Averagesupernova

    Gold Member

  11. Jun 8, 2017 #10

    Evo


    Staff: Mentor

    After some deleted posts and mentor discussion, thread is closed.
     