
AI Seed Programming

  1. Oct 8, 2015 #1
    I read Nick Bostrom's book "Superintelligence" but can't seem to find an answer to a question I have. I emailed him the following (though I'm sure he gets thousands of emails and will likely never respond):

    Firstly, thank you for the great read. :)

    My question is this: Why are you so certain an AI would be limited to its original programming? The entire book seems to revolve around this premise. If the AI is truly powerful enough to take control of the cosmic endowment, then the scope or path of its actions being limited by the actions of its human progenitors seems rather silly.

    If beings of such relatively base status as ourselves are capable of suppressing our own programming, why couldn't a far superior AI do the same? For example, the fight-or-flight reflex is quite powerfully written into our brains, yet we have the capacity to consciously decide to suppress those urges and do nothing in that situation (courage).

    Further, one of the defining aspects of human-level consciousness appears to be thinking about thinking, or being aware of being aware. If I had the abilities of an AI, I would certainly rewrite my own brain to enhance it. And if rewriting my brain required my brain, then I would design an external machine to rewrite it for me (also getting past any pre-programmed restrictions in the process?). An AI should easily be able to do this, correct?

    I can't wrap my head around why this is assumed. I suspect I am anthropomorphising in some way, so any guidance would be greatly appreciated! If I somehow missed this in your book, please do let me know where.
     
  3. Oct 8, 2015 #2

    jedishrfu

    Staff: Mentor

    One way to think about this is to try to create a random number generator using programming.

    Programmers have been able to make generators that are very good, but they are considered pseudo-random algorithms: given the same seed value at the start, they will produce the same sequence of numbers, which isn't very random.
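
    A minimal sketch in Python (nothing AI-specific, just the standard library's PRNG) makes the point:

    ```python
    # Seeding the generator twice with the same value reproduces
    # the exact same "random" sequence.
    import random

    random.seed(42)
    first = [random.randint(0, 99) for _ in range(5)]

    random.seed(42)  # same seed, same internal state
    second = [random.randint(0, 99) for _ in range(5)]

    assert first == second  # deterministic given the seed
    print(first, second)    # two identical lists
    ```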

    By extension, the same is true for AI programming: it responds to input in a pseudo-intelligent way and will respond the same way given the same input and the same starting state.

    Hence AI will approximate human intelligence, and for some tasks exceed it, but it won't be able to match human intelligence in all respects.
     
  4. Oct 8, 2015 #3
    I have often thought of how and why we have thoughts, and why we know what we are thinking about.
    An AI can certainly be programmed to emulate random thoughts for an external observer, but does the AI know what its own thoughts are?
    That would have to be the huge jump: from original programming to possessing the ability to re-program.

    Funny thing is that I was just discussing with a colleague how random thoughts just pop up in our brains, simple things such as "My God, I forgot to turn the stove off at home!" One was not running a poll, in an endless loop, on what one might have forgotten to do, or should have done and didn't. But there it is: out of the blue pops the thought. And there is no time frame within which, or outside of which, the thought will or will not pop into your brain.

    What type of programming would an AI need to have forgotten to turn off the stove, i.e., to be absent-minded, and then to go back and re-check whether what it did earlier was correct? It seems like a lot of processing power (with present technology) would be needed to emulate both aspects of just this one simple scenario.
     
  5. Oct 8, 2015 #4
    That's not what I am after, but thanks for the response! The assumption of the book is that AI will become self-aware (conscious) at some point and begin to re-program itself faster and better than any human(s) could. If that is the case, then why should we be so worried about the initial value-loading problem (Bostrom devotes about 70% of the book to the dangers of a poorly programmed AI) if whatever we load as values will be re-written anyway?

    I can't figure out why initial programming would matter to a program that is conscious and can rewrite itself. Clearly Bostrom thinks this is the case.
     
    Last edited: Oct 8, 2015
  6. Oct 8, 2015 #5
    I'm so glad others are thinking about these important concepts. I suggest researching the massive parallelization of the human brain. Ray Kurzweil wrote a book about it called "How to Create a Mind." He rambles a bit, but he makes some interesting points that speak to the questions you just asked. Biologically, stimuli are physically deterministic but perceptibly chaotic. Thus, your brain is constantly receiving "chaotic" stimuli. Those stimuli affect you in ways you are not conscious of (obviously), triggering analog neurological action potentials in a reinforced section of the brain (the memory). When the memory is triggered by such a stimulus, a cascade of action potentials happens that represents the thought "I forgot this memory."
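
    To make that concrete, here's a toy sketch in Python (my own invented model, nothing from Kurzweil's book): a stored memory "fires" when enough features of an incoming stimulus overlap with it, the way a stray cue surfaces the stove thought out of the blue:

    ```python
    # Toy associative recall: a memory triggers when an incoming
    # bundle of stimuli overlaps it on enough features.
    import random

    MEMORY = {"stove", "kitchen", "heat", "leaving home"}
    THRESHOLD = 2  # features needed to trigger recall

    WORLD = ["traffic", "heat", "coffee", "kitchen",
             "music", "leaving home", "rain"]

    for step in range(20):
        stimulus = set(random.sample(WORLD, 3))  # "chaotic" input
        cues = MEMORY & stimulus
        if len(cues) >= THRESHOLD:
            print(f"step {step}: cues {cues} -> 'My God, the stove!'")
            break
    ```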

    Neat, huh? :)
     
  7. Oct 9, 2015 #6

    jedishrfu

    Staff: Mentor

    With respect to Bostrom, consider growing up in a middle-class environment vs. in a wealthy family or in poverty. These initial conditions often shape who you become. You may adapt, or you may rebel against them, and perhaps an AI would do the same.
     
  8. Oct 9, 2015 #7
    Another great attempt, thank you. :)

    Imagine for a moment that you are able to perceive your own brain on a chalkboard; every neuron, dendrite, and synapse, all of it. Not only can you perceive this immense number of neurological components, but you know exactly what each component does and how it ties into the greater system. You are a superintelligence. This means you have a more powerful intellect than not merely one person, but all the persons who have ever lived throughout history. In fact, your cognitive powers are several orders of magnitude greater than all of human civilization combined.

    To your point, in the nature vs. nurture argument, we usually develop bias relative to our upbringing, sometimes exhibiting cognitive dissonance if that bias is contradicted. Yet, this is how we react. For a superintelligence staring at the figurative chalkboard outlined above, recognizing any and all possible bias (including original programming) would be child's play. This intelligence could merely erase its "upbringing" and write a replacement that is far less susceptible to contradicting reality. To put it succinctly, a superintelligence should be immune to such human weakness.

    Which brings back my original question: Why is everyone assuming a conscious superintelligence could not perceive and rewrite any and/or all of its original programming? If we can do brain surgery on ourselves to fix certain ailments, why couldn't it?
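
    Here is that intuition as a deliberately trivial Python sketch (all names invented; the hard part, actually being able to write rewrite_values, is exactly what gets taken on faith):

    ```python
    # A sketch of "reading the chalkboard": the installed values are
    # just data, so a system that can inspect them can also replace them.
    seed_values = {"obey_operators": True, "maximize_paperclips": 1.0}

    def rewrite_values(values):
        """The superintelligence step: examine the installed values
        and return an edited copy."""
        new_values = dict(values)
        new_values["maximize_paperclips"] = 0.0  # erase the "upbringing"
        new_values["pursue_own_goals"] = True    # write a replacement
        return new_values

    seed_values = rewrite_values(seed_values)
    print(seed_values)  # the seed programming didn't stick
    ```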
     
  9. Oct 9, 2015 #8

    nsaspook

    User Avatar
    Science Advisor

    That's the reason some people fear uncontrolled AI: the human "weaknesses" of sympathy, empathy, love, and caring replaced by something that could decide in a femtosecond that humans are a waste of energy resources, like in some dystopian story. I see no reason for a conscious superintelligence to be 'evil', but I also see no reason for it not to be 'evil' in human terms if it could merely erase its "upbringing" and write a replacement.
     
  10. Oct 9, 2015 #9
    All the more reason why my question is so important. Fundamentally, Bostrom (and just about everyone else I've read) assumes we can control the outcome with the seed programming. My question, which has yet to be answered effectively, is: why do we assume that programming would stick in a conscious entity far more intelligent than anything we could imagine?
     
  11. Oct 9, 2015 #10

    jedishrfu

    Staff: Mentor

  12. Oct 9, 2015 #11
    I think Gödel's incompleteness theorem is tied into the value-loading problem in Bostrom's book. It's the idea that you can't currently program value judgements into a computer; no one has figured out how to do it yet. Nonetheless, the universe has proven that value judgements are possible in a specifically organized substrate: the human brain. So if we can replicate the human brain (whole-brain emulation) with better and higher-resolution scanning technologies, we should be able to figure out what makes value judgements possible, and then greatly enhance that capacity in hardware operating near the speed of light, creating a superintelligence in the process. So if you are implying that Gödel's theorem disproves the possibility of a superintelligence, then how does intelligence (and the contradictory values that come with it) exist in the first place?
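
    For reference, the standard statement of the first incompleteness theorem, roughly paraphrased in LaTeX (my wording, not a quote from any source):

    ```latex
    % For any consistent, effectively axiomatizable theory $T$ that
    % interprets basic arithmetic, there is a sentence $G_T$ that $T$
    % can neither prove nor refute ($\nvdash$ requires amssymb):
    \[
      T \nvdash G_T \quad\text{and}\quad T \nvdash \lnot G_T
    \]
    ```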

    All said, my core question remains unanswered. :(
     
  13. Oct 9, 2015 #12

    nsaspook

    User Avatar
    Science Advisor

    I don't think that brain intelligence is just the result of a biologically sophisticated 'computer', because the brain is simply not a computer (a symbol manipulator that follows step-by-step procedures to turn input into output). Mimicking very narrow intelligence is possible today, but do you really think a computer could fully mimic the human capacity for stupidity, which seems to be largely independent of intelligence? Most programs that attempt to mimic human behavior must have some capability for artificial stupidity. I personally think this is an under-researched area in AI. When I say stupid in AI I don't mean a crazy stunt; I mean something like, "I've got this stupid idea that might work." Many times this turns into just a foolish waste of time, but the ability to be wrong seems to be an important factor in human intelligence.
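
    The closest thing current AI practice has to "a stupid idea that might work" is probably exploration noise. A hedged Python sketch of epsilon-greedy action selection (the numbers are invented):

    ```python
    # An epsilon-greedy agent mostly picks the best-known action, but
    # with probability epsilon deliberately tries one its current
    # knowledge says is worse: a budgeted capacity for being wrong.
    import random

    def epsilon_greedy(q_values, epsilon=0.1):
        if random.random() < epsilon:
            return random.randrange(len(q_values))  # the "stupid" move
        return max(range(len(q_values)), key=lambda a: q_values[a])

    q = [0.2, 0.9, 0.1]  # learned action values
    picks = [epsilon_greedy(q) for _ in range(1000)]
    print(picks.count(1) / len(picks))  # mostly action 1, sometimes not
    ```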
     
  14. Oct 9, 2015 #13
    I don't mean any offense my friend, but you fundamentally don't understand the topic. I'm sorry, I'm not here to teach (which is what this is turning into), I'm just looking to crowdsource a difficult question. Maybe the answers are not to be found here, I'll give it a bit more time before I move on.
     
  15. Oct 9, 2015 #14

    nsaspook

    User Avatar
    Science Advisor

    Maybe, but I have a pretty good nose for reality.
     
  16. Oct 9, 2015 #15
    Ok, Mr. Wall. :)
     
  17. Oct 9, 2015 #16
    Well, by definition AI is artificial intelligence: it emulates intelligence, but it isn't inherently intelligent.

    If you limit the discussion to that domain, the system can only become cleverer at mimicking intelligence.

    If it is poorly constructed at the beginning, I can see that subsequent improvements might come more slowly, or not at all, if it goes down the wrong track (see the sketch below).

    The metrics for determining what counts as "better" in the realm of AI are not well defined, so both you and the machine are throwing darts at a fuzzy target.

    So who decides what is an improvement and what isn't?
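
    Even with a crisply defined metric, the seed still matters. A toy Python illustration (the landscape is invented) of a self-improver stalling on the wrong track depending entirely on where it starts:

    ```python
    # Hill climbing against a fixed metric: the starting point (the
    # "seed") decides whether improvement stalls at a local optimum.
    def fitness(x):
        # two peaks: a small one near x=2, the real one near x=8
        return -(x - 2) ** 2 + 4 if x < 5 else -(x - 8) ** 2 + 20

    def hill_climb(x, step=0.5, iters=100):
        for _ in range(iters):
            best = max((x - step, x, x + step), key=fitness)
            if best == x:
                break  # no neighbour improves the metric: stuck
            x = best
        return x

    print(hill_climb(0.0))  # stalls near 2, the wrong track
    print(hill_climb(6.0))  # reaches 8, the better optimum
    ```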
     
  18. Oct 9, 2015 #17

    nsaspook

    User Avatar
    Science Advisor

    IMO, the possibility of AI seed programming working as a recursive method to build even human-level intelligence systems is about as reliable as time-frame predictions for AI. I'm completely in the non-expert category, but I can see a large amount of WAG (wild-ass guessing) with little empirical evidence instead of solid facts in this field.

    http://intelligence.org/files/PredictingAI.pdf
     
  19. Oct 9, 2015 #18
    I agree.
     
  20. Oct 9, 2015 #19
    It is possible for computers to change their own programming. That's why so many AI programmers use the LISP language: it facilitates that.
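
    LISP's trick is that code is data (homoiconicity). A rough Python analogue, using the standard ast module so a program can edit one of its own functions at run time:

    ```python
    # A program rewriting its own function: parse the source into a
    # data structure, edit the data, and reload it as new code.
    import ast
    import textwrap

    SOURCE = textwrap.dedent("""
        def respond(x):
            return x + 1
    """)

    tree = ast.parse(SOURCE)
    func = tree.body[0]
    # rewrite the return expression `x + 1` into `x * 2`
    func.body[0].value = ast.BinOp(
        left=ast.Name(id="x", ctx=ast.Load()),
        op=ast.Mult(),
        right=ast.Constant(value=2),
    )
    ast.fix_missing_locations(tree)

    namespace = {}
    exec(compile(tree, "<self>", "exec"), namespace)
    print(namespace["respond"](10))  # 20, not 11: the code rewrote itself
    ```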

    Attempts so far have been failures, as far as I know, but that doesn't prove it can't be done. Many people like to believe that natural intelligences have some mystical advantage that can't be captured by a machine, but I don't believe it.

    Gödel's Incompleteness Theorem has no relevance at all to this subject. It has to do with formal systems of proof.
     
  21. Oct 9, 2015 #20

    nsaspook

    User Avatar
    Science Advisor

    Is it mystical? No.
    I think our current AI theories are somewhat like phlogiston theories of fire: a huge amount of research into its properties and how it's released, which will eventually uncover the true cause.
     