
Destined to build a super AI that will destroy us?

  1. Oct 3, 2016 #1
    Sam Harris had a TED Talk on this not long ago. What do you think? Are we taking the dangers seriously enough or does Sam Harris have it wrong?

     
    Last edited: Apr 28, 2017
  3. Oct 3, 2016 #2

    stevendaryl

    Staff Emeritus
    Science Advisor

    It sounds halfway plausible to me. But I think the reasons people don't take it seriously are (1) it's just too hard for those of us who are not super-intelligent to comprehend the implications of super-intelligence, and (2) people don't like to think about problems that they have no idea how to even begin solving. Harris makes the comparison with global warming, but we're not really taking that very seriously as a species, either. If a problem requires cooperation among all the major countries on Earth, you can pretty much bet that cooperation isn't going to happen.

    One thing that he maybe glosses over is the distinction between being intelligent and having a self, in the sense of having emotions, desires, beliefs, goals, etc. He talks about the AI having goals that conflict with those of humans, but an AI is not necessarily going to have any goals at all, other than those put in by the programmer. (Maybe they could evolve goals?)
     
  4. Oct 3, 2016 #3
    So we're kind of doomed as a species. I mean, obviously nothing lasts forever, but this could spell our demise prematurely.

    I think that is what Harris says could happen, especially if safeguards aren't put in place.
     
  5. Oct 3, 2016 #4

    stevendaryl

    Staff Emeritus
    Science Advisor

    I feel that global warming, depletion of natural resources, destruction of the ecosystem and overpopulation might get us first.
     
  6. Oct 3, 2016 #5

    anorlunda

    Science Advisor
    Gold Member

    A positive spin on AI taking over is the technological singularity, scheduled for 2045 according to Ray Kurzweil. I like his version of the vision.

    I look at it as the next step in evolution. The view that Homo sapiens is the end game of evolution and must not be succeeded seems slightly creationist. Sure, it's scary and not gentle. Go back and read Arthur C. Clarke's Childhood's End for a scarier scenario.
     
  7. Oct 3, 2016 #6

    QuantumQuest

    Gold Member

    In my opinion, the warning from Sam Harris is generally right and not very hard for anyone to imagine, but I somewhat disagree with his timing. The fact is that a self-replicating "machine" (if and when we humans reach that stage, with whatever mix of software and hardware) can get out of control fairly easily. Something that is fully under control today can evolve fast enough into something uncontrollable, in an irreversible way. And this irreversibility has more to do with the economic, social and political culture that this whole process creates during the evolution of such a machine generation. An equally important factor is that it will be used as a weapon of one kind or another, and this just feeds its evolution. Extreme underemployment will be just a byproduct of all this. Now, I think this is a fully viable but pretty extreme scenario, one that the big countries funding the research won't let happen just like that. So I think that what Sam Harris describes won't happen within fifty years or any similarly short time. I also agree with stevendaryl about the two reasons people don't take it seriously. It would be really perfect if that weren't the case.

    But I also have to point out the good things that such extreme evolution will bring. It really is absolutely viable to conquer many diseases and various other things that are crucial to our everyday lives. One thing that I regard as bad is that control of all this is heavily influenced by many idiosyncrasies of our species. I also agree that other things will outpace AI endeavors, like global warming, depletion of natural resources and especially overpopulation. In my opinion, the correctness of this statement follows naturally from the fact that all of these are already out of control, in an irreversible way.
     
  8. Oct 3, 2016 #7

    jack action

    Science Advisor
    Gold Member

    That is pure nonsense.

    Intelligence is the capacity to imagine things.

    Say we built a machine that has so much imagination that we as human beings cannot understand what it spits out. Are we going to build this machine such that it can be autonomous? Why would we do that? It's like an ape trying to raise a human being in the hope that he would build a spaceship. Such a human being would only be required to do ape things. It might even get killed rather quickly if it diverged too much from the group. I can't imagine an ape watching a human being build a spaceship, working to feed him, protect him, care for him and cover all of his needs, without ever understanding what he does.

    The other question is: can we build autonomous things? Nature is autonomous, but it is extremely complex. Can we build a system that would replicate nature? We take animals or plants and just try to make them reproduce in captivity, and we fail. Is it reasonable to think that we could make machines that will be able to reproduce or repair themselves, or even just protect themselves for that matter? Let's say we build a machine that can imagine how to create such a system (because we can't): will we be able to understand what it has thought of and reproduce it? If so, why would we do it for the machine and give it its autonomy? What would we gain from such a large amount of work (I can only imagine that it won't be easy)? Or are we supposed to think that this machine will build this system without us knowing about it, while we are just the idiots feeding a machine that does things we don't understand?

    This idea that we will build machines that will cover the earth AND that will all be connected to one another AND that will, without warning, suddenly get smart enough to be autonomous AND that will be stronger than humanity AND that will think it is a good idea to destroy, or even just ignore, humanity is a good script for a science-fiction movie at best.
     
  9. Oct 3, 2016 #8

    stevendaryl

    Staff Emeritus
    Science Advisor

    I really can't understand your argument. As Sam Harris points out, we have lots of really good reasons for making smarter and smarter machines, and we are already doing it. Will incremental improvements to machine intelligence get all the way to human-level intelligence? It's hard to know, but I don't see how your argument proves that it is nonsense.
     
  10. Oct 4, 2016 #9

    Filip Larsen

    Gold Member

    As I see it, we are already doing it, even if this is not a goal in itself.

    Currently the promise of deep learning and big data is desensitizing most of us to the objection of employing algorithms whose results we don't really understand and whose results can change autonomously (by learning) to give "better" results. That is, we will most likely end up accepting that learning systems whose inner workings we don't understand will be employed everywhere it is beneficial and feasible. It seems fairly obvious that autonomously interacting and data-sharing learning systems can eventually be made to outperform non-interacting systems, simply because each system can then provide even more benefit to its users and owner, and from then on you pretty much get an increasing degree of autonomy.

    In short, the promise and actual delivery of fantastic possibilities will in all likelihood drive us to interconnect and automate intelligent systems like never before, and the inherent complexity will at the same time erode our understanding of any eventual negative consequences, or even of how to recognize them. If something negative does happen, I think we would be hard pressed to "disconnect" our systems again unless it was really bad. And even if all people across nations, cultures and subcultures work towards ensuring that something very bad never happens (good luck with that, by the way), I question whether we can really expect to be in control of a global interconnected learning system that we don't understand in the same way we would, in isolation, understand a car, a medical device, or an elevator.

    Of course, all this does not imply that something very bad will happen, or that we will end up as dumb humans cared for by smart machines, only that, unless we have good reasons to think otherwise, it is a likely possibility. And I have yet to hear any such reasons.
     
  11. Oct 4, 2016 #10

    Bystander

    Science Advisor
    Homework Helper
    Gold Member
    2016 Award

    I would say, "He has it wrong." Mortality rates due to industrial accidents decline as industries "mature." "AI" is just another industry.
     
  12. Oct 4, 2016 #11
    Or a more likely scenario would be smart humans cared for by dumb machines.

    A gloom and doom story.
    From the talk it can be concluded that it is never-ending.
    I guess it is AI all the way down. Humans build AI, ensuring humanity's destruction. AI builds super-AI, ensuring AI's destruction. What next? Super-duper-AI, and so on; each generation, with more intelligence, destroys the previous one? I'm pretty sure the logical points presented in the TED talk were not all that well thought out.
     
    Last edited: Oct 4, 2016
  13. Oct 4, 2016 #12
    A common approach towards developing an AI is to build a neural network. These are often 'trained' by defining a 'fitness' function against which variations in a population of networks are measured. It seems quite plausible to me that these functions might themselves be programmed to allow some variation, which over time could develop in completely unpredictable directions.
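    To make the fitness-function idea concrete, here is a minimal sketch (my own, purely illustrative; the XOR task, the 2-2-1 network size and all names are assumptions, not anything from the thread) of evolving a population of tiny neural networks against a fixed fitness function. The point is only that the loop of score, select and mutate needs no human in it once it is running.

    Code (Python):
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy task: approximate XOR. Inputs and targets.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    Y = np.array([0, 1, 1, 0], dtype=float)

    def new_net():
        """Random weights for a tiny 2-2-1 network."""
        return {"w1": rng.normal(size=(2, 2)), "b1": rng.normal(size=2),
                "w2": rng.normal(size=2), "b2": rng.normal()}

    def forward(net, x):
        """Forward pass: tanh hidden layer, sigmoid output."""
        h = np.tanh(x @ net["w1"] + net["b1"])
        return 1.0 / (1.0 + np.exp(-(h @ net["w2"] + net["b2"])))

    def fitness(net):
        """Higher is better: negative mean squared error on the toy task."""
        preds = np.array([forward(net, x) for x in X])
        return -np.mean((preds - Y) ** 2)

    def mutate(net, scale=0.2):
        """Copy a network and perturb every weight slightly."""
        return {k: v + rng.normal(scale=scale, size=np.shape(v)) for k, v in net.items()}

    # Evolution loop: score the population, keep the fittest, refill by mutation.
    population = [new_net() for _ in range(40)]
    for generation in range(300):
        population.sort(key=fitness, reverse=True)
        parents = population[:10]
        children = [mutate(parents[rng.integers(len(parents))]) for _ in range(30)]
        population = parents + children

    best = max(population, key=fitness)
    print("best fitness:", round(float(fitness(best)), 4))
    print("outputs on the four XOR inputs:", [round(float(forward(best, x)), 2) for x in X])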

    Given the ease with which most things can now be obtained over the internet, and the developments in visual recognition software, autonomous vehicles and robots, I do not think it will be too long until someone builds a population of self-maintaining machines, with spare parts ordered automatically.

    The machines do not even need to repair themselves: consider a fleet of autonomous vehicles programmed to return to service bases periodically. As long as they continue to perform the basic functions expected of them, their internal programming could evolve in any number of ways without attracting unwanted attention from human 'overseers'. Over time they become ubiquitous and an essential part of life. An ideal symbiosis, or an inevitable step towards the subjugation of mankind?
     
  14. Oct 4, 2016 #13

    Filip Larsen

    Gold Member

    There are plenty of scenarios where we for some reason end up with technology that does not lead to machines becoming much smarter than an average human, but the point Harris (and others, including me) is trying to make is that those reasons, should they exist, are currently unknown. That is, so far there is nothing to exclude that we are heading towards a future where we are forced to, or choose to, give up the kind of control we have so far been used to having. The best argument most people seem able to muster is that we won't allow ourselves to lose control, yet they are unable to point to any mechanism or principle that would make scenarios with loss of control impossible, or at the very least extremely unlikely.

    I am tilting towards the opinion that people who think about this but are not concerned have perhaps adopted a kind of fatalistic attitude, embracing change with less concern about being in control, or a belief that by some kind of luck or magic we will never seriously lose control of our technology. I am genuinely puzzled that people involved in driving this technology do not seem to be concerned, yet are unable to provide technical arguments that would make me equally unconcerned.

    If I were banging explosives together in my kitchen, it would be sane for my neighbors to require me to explain in technical terms why there is no risk involved or, failing that, to explain what measures I take to ensure (not just make likely) that nothing bad will happen. Why is AI (or any disruptive technology) any different?
     
  15. Oct 4, 2016 #14

    stevendaryl

    Staff Emeritus
    Science Advisor

    That's true. My point is that there is a distinction between understanding and action. Your understanding might be godlike, in the sense that you can predict the future with perfect accuracy (or as well as is theoretically possible, given that things are nondeterministic), but you still may have no goals---no reason to prefer one future over another, and no reason to take actions to assure one future or another.

    In the natural world, understanding and action have always evolved in tandem. There is no selective pressure to understand things better unless that better understanding leads to improved chances for survival and reproductive success. In contrast, with machines, the two have developed separately---we develop computer models for understanding that are completely separated from actions by the computers themselves. The programs make their analyses and just present them to the humans to do with as they like. We have also, more or less independently, developed machine capabilities for action that remain under the control of humans. That is, the goals themselves don't come from the machine. As far as I know, there hasn't been much AI research done on machines developing their own goals (except as a means to an end, where the end was specified by the humans). It certainly seems possible that machines could develop their own goals, but I think you would need something like natural selection for that to happen--you'd need robot reproduction where only the fittest survive. I suppose that's possible, but it's not a situation that we have currently.
     
  16. Oct 4, 2016 #15
    A quick internet search revealed this article from yesterday: https://www.technologyreview.com/s/602529/google-is-building-a-robotic-hive-mind-kindergarten/

    There are also lots of teams developing 'attention-seeking' robots. Presumably it would not be too hard to link the two ideas so that you have teams of attention-seeking robots learning from each other. These could easily be released as toys, becoming popular in the richer nations. There are many types of action which meet the loose goal of 'attention seeking' but lie outside the intended parameters: vandalism or inflicting pain might, in the short term, attract attention far more effectively than performing a dance routine... My point is that almost any goal can lead to unintended consequences.
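    As a purely illustrative toy of that last point (mine, not from the article; the action names and numbers are made up), a naively specified "maximize attention" objective rewards exactly the wrong behaviour, because what the designer actually cares about never appears in the reward:

    Code (Python):
    # The optimizer only ever sees "attention", never what the designer considers acceptable.
    attention = {"dance routine": 3.0, "vandalism": 9.0, "inflict pain": 10.0}        # proxy reward (made up)
    acceptable = {"dance routine": True, "vandalism": False, "inflict pain": False}   # the designer's real intent

    def best_action(reward):
        """Pick the highest-reward action, as any simple optimizer would."""
        return max(reward, key=reward.get)

    chosen = best_action(attention)
    print(f"chosen: {chosen!r}, acceptable to the designer: {acceptable[chosen]}")
    # -> chosen: 'inflict pain', acceptable to the designer: False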
     
  17. Oct 4, 2016 #16

    jack action

    Science Advisor
    Gold Member

    Making smarter machines is not nonsense; thinking they will endanger the human species is.

    We may not understand the how, but we understand the results. Otherwise, why would we keep running a machine that gives us things we don't understand? We would assume it's garbage and that the machine is not working properly. We certainly are not going to give it control of all the nuclear missiles on earth.

    Having a machine perform one task autonomously is not having an autonomous machine; nor is having one able to order spare parts on eBay.

    First, a single machine cannot be autonomous, just as humans are not. We need plants, insects and the rest of the animal kingdom. We are part of a complex system in which all depend on each other to survive. Do the flowers need the bees to reproduce, or do the bees need the flowers to nourish themselves? No one can tell.

    For machines to develop such a system that wouldn't include humans is nonsense (something like «The Terminator» or «The Matrix» movies). Imagining a system where machines could develop an independent society so fast that humans are unaware of it, one where we can't flip a switch off or inject a virus to kill the brain, is also nonsense.

    The machines we make are not that reliable. To do that kind of magic, you need adaptability, something organic life forms can do. You also need diversity, so having a HAL 9000, Skynet or VIKI controlling all machines on earth would be impossible, and it is a stupid path to take, evolution-wise. It creates a single «weak spot» that puts the entire system in peril.

    I just can't imagine that all of this evolution could happen so fast without us being able to notice it. I can't imagine that the first thing these machines will do is develop «sneakiness» to avoid detection.
     
  18. Oct 4, 2016 #17

    Filip Larsen

    Gold Member

    I am referring to autonomy in decision-making. An autonomous control system [1] is a system capable of carrying out its functions independent of any outside decision-making or guidance. It gets its autonomy because we built it into the system, and it retains its autonomy as long as we trust the system.

    The ability of learning systems to successfully discover patterns in large amounts of data is rapidly increasing and already surpasses humans in many areas. Using such systems to assist human decision-making is just around the corner, for instance in healthcare with Watson [2]. From that "starting point" it is most likely that we will keep striving to expand and hand over more and more decisions to autonomous systems, simply because such systems can be made to perform better than we do.

    So my concern is personally not so much that superior decision-making systems take control from us, but that we willingly hand over control to such systems without even the slightest worry about getting it back if needed. It feels to me that most people like to think, or accept, that such better performance is a goal in itself, almost no matter what other consequences or changes it will have on our future. The promise of a golden future in which new technology like AI makes our big problems go away, or severely reduces them, also seems to make people less concerned about losing control of the systems along the way.

    The concern is not the Hollywood scenarios, but the scenarios we are currently heading into through the research and the drive for automation happening right now or in the near future.

    So you are calm and confident that no severe problems can happen, simply because you cannot imagine how fast a learning algorithm can dig out patterns in big data, or that someone will use this to solve a problem you don't want solved (like whether or not you should keep your job)?

    Imagine that some research company builds a completely new type of nuclear reactor very near your home. What would it take for you to remain unconcerned about this new reactor? Would you expect any guarantees from the relevant authorities? What if the only guarantee you ever learn about is a statement from the company itself saying that they have tried this new reaction in a lab without problems and that you really should relax and look forward to cheap electricity?


    [1] https://en.wikipedia.org/wiki/Autonomous_robot
    [2] https://www.ibm.com/watson/health/
     
  19. Oct 4, 2016 #18
    Are you assuming here that we will build in some sort of control, like Asimov's rules? The problem is that we would all have to agree before we go down the AI path, and I'm fairly sure that we won't all agree. Then, as the intelligence of the AI grows, it will surely find a way around our simplistic rules to limit it.
     
  20. Oct 4, 2016 #19
    Almost certainly. I see no limit to theoretical intelligence; if our brains can do it, so can computers. Thinking we are anywhere near the top of possible capabilities is very anthropocentric. We'll hit an AI singularity where the AI builds the next generation of AI, perhaps with an evolutionary algorithm. Simulated evolution can do a million generations in the time real biology does one. I see this as the most likely solution to the Fermi Paradox.
     
  21. Oct 4, 2016 #20
    I have a couple of significant problems with his descriptions.

    First, there is no "general intelligence" that is a goal of government or industry. When machines become "smarter", they don't become more like humans in their thinking. Even when we do start using technology similar to what is in our skulls, it won't be a duplicate or improvement of the human brain, not unless that is a very deliberate goal.

    Second, there's no chance of an intelligent computer taking over by accident. The danger does not come from making the computer smarter; it comes from connecting the computer to motor controls. You don't do that without a lot of caution. You can have a computer design a better computer and a better weapon system, and you would presumably also have it develop a better test protocol and safety systems. If you then decide to turn the entire weapons development over to the computer, what transpires is not an "accident"; it's a deliberate act.

    Still, there is a problem. What computers and technology are doing (as correctly depicted in the TED talk) is empowering individuals: not all individuals, just the ones who develop or procure them. When the "technological singularity" happens, it will happen to someone, someone who will have the ability to overpower all competitors and governments. If that person wanted to share or contain that power, whom would he or she trust?
     