AI Seed Programming: Questions & Answers for Nick Bostrom's Superintelligence

In summary: if the AI can rewrite itself, why does initial programming matter? The thread discusses the possible dangers of a self-improving artificial intelligence and the book's argument that initial programming (specifically the loading of values into the AI) is essential to keeping a superintelligence aligned with human interests.
  • #1
Aaron8547
I read Nick Bostrom's book "Superintelligence" but can't seem to find an answer to a question I have. I emailed Nick the following (though I'm sure he gets thousands of emails and will likely never respond):

Firstly, thank you for the great read. :)

My question is this: Why are you so certain an AI would be limited to its original programming? The entire book seems to revolve around this premise. If the AI is truly powerful enough to take control of the cosmic endowment, then the scope or path of its actions being limited by the actions of its human progenitors seems rather silly.

If beings of such relatively base status as ourselves are capable of suppressing our own programming, why couldn't a far superior AI do the same? For example, the fight-or-flight reflex is quite powerfully written into our brains, yet we have the capacity to consciously suppress those urges and do nothing in that situation (courage).

Further, one of the defining aspects of human-level consciousness appears to be thinking about thinking, or being aware of being aware. If I had the abilities of an AI, I would certainly rewrite my own brain to enhance it. And if rewriting my brain required my brain, then I would design an external machine to rewrite it for me (also getting past any pre-programmed restrictions in the process?). An AI should easily be able to do this, correct?

I can't wrap my head around why this is assumed. I suspect I am anthropomorphising in some way, so any guidance would be greatly appreciated! If I somehow missed this in your book, please do let me know where.
 
  • #2
One way to think about this is to try to write a random number generator in software.

Programmers have built generators that are very good, but they are still pseudo-random algorithms: given the same seed value at the start, they will produce the same "random" sequence of numbers every time, which isn't very random at all.

By extension, the same is true of AI programming: it responds to input in a pseudo-intelligent way, and it will respond the same way given the same input and the same starting state.

Hence AI will approximate human intelligence, and for some tasks exceed it, but it won't be able to match human intelligence in all respects.
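To make the point concrete, here is a minimal sketch in Python (any language would do; this is just an illustration): seed the generator twice with the same value and you get the identical "random" sequence.

import random

random.seed(42)                 # fix the seed
first_run = [random.randint(0, 99) for _ in range(5)]

random.seed(42)                 # same seed again
second_run = [random.randint(0, 99) for _ in range(5)]

# Same seed, same "random" numbers -- which is why it's only pseudo-random.
print(first_run == second_run)  # True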
 
  • #3
I have often thought about how and why we have thoughts, and why we know what we are thinking about.
An AI can certainly be programmed to emulate random thoughts for an external observer, but does the AI know what its own thoughts are?
That would have to be the huge jump: from original programming to possessing the ability to re-program.

Funny thing is that I was just discussing this with a colleague: how random thoughts just pop up in our brains, simple things such as "My God, I forgot to turn the stove off at home!" One was not running a poll, in an endless loop, on what one might have forgotten to do or should have done and didn't. But there it is: out of the blue pops the thought. And there is no time frame within which the thought will or will not pop into your brain.

What type of programming would an AI need to have "forgotten" to turn off the stove, i.e. to be absent-minded, and then to go back and re-check that what it did earlier was correct? It seems like a lot of processing power (with present technology) would be needed to emulate both aspects of just this one simple scenario.
 
  • #4
jedishrfu said:
One way to think about this is to try to write a random number generator in software.

Programmers have built generators that are very good, but they are still pseudo-random algorithms: given the same seed value at the start, they will produce the same "random" sequence of numbers every time, which isn't very random at all.

By extension, the same is true of AI programming: it responds to input in a pseudo-intelligent way, and it will respond the same way given the same input and the same starting state.

Hence AI will approximate human intelligence, and for some tasks exceed it, but it won't be able to match human intelligence in all respects.

That's not what I am after, but thanks for the response! The assumption of the book is that AI will become self-aware (conscious) at some point and begin to re-program itself faster and better than any human(s) could. If that is the case, then why should we be so worried about the initial value-loading problem (Bostrom devotes about 70% of the book to the dangers of a poorly programmed AI) if whatever we load as values will be re-written anyway?

I can't figure out why initial programming would matter to a program that is conscious and can rewrite itself. Clearly Bostrom thinks this is the case.
 
  • #5
256bits said:
I have often thought about how and why we have thoughts, and why we know what we are thinking about.
An AI can certainly be programmed to emulate random thoughts for an external observer, but does the AI know what its own thoughts are?
That would have to be the huge jump: from original programming to possessing the ability to re-program.

Funny thing is that I was just discussing this with a colleague: how random thoughts just pop up in our brains, simple things such as "My God, I forgot to turn the stove off at home!" One was not running a poll, in an endless loop, on what one might have forgotten to do or should have done and didn't. But there it is: out of the blue pops the thought. And there is no time frame within which the thought will or will not pop into your brain.

What type of programming would an AI need to have "forgotten" to turn off the stove, i.e. to be absent-minded, and then to go back and re-check that what it did earlier was correct? It seems like a lot of processing power (with present technology) would be needed to emulate both aspects of just this one simple scenario.

I'm so glad others are thinking about these important concepts. I suggest researching the massive parallelization of the human brain; Ray Kurzweil wrote a book about it called "How to Create a Mind." He rambles a bit, but he makes some interesting points that bear on the questions you just asked. Biologically, stimuli are physically deterministic but perceptually chaotic. Thus your brain is receiving "chaotic" stimuli all the time. Those stimuli affect you in ways you are not conscious of, triggering analog neurological action potentials in a reinforced section of the brain (the memory). When the memory is triggered by such a stimulus, a cascade of action potentials occurs that represents the thought "I forgot this!"
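Here's a purely illustrative toy in Python (a cartoon, not real neuroscience; the memories, weights, and threshold are all made up): noisy "stimuli" keep nudging stored memories, and whichever one first crosses an activation threshold "pops" into awareness, at no predictable time.

import random

# Cartoon model: stored "memories" accumulate activation from random
# stimuli until one crosses a threshold and "pops" into awareness.
memories = {"stove left on?": 0.0, "lock the door?": 0.0, "reply to that email?": 0.0}
THRESHOLD = 1.0

step = 0
while True:
    step += 1
    memory = random.choice(list(memories))        # a chaotic stimulus lands somewhere
    memories[memory] += random.uniform(0.0, 0.3)  # subconscious nudge
    if memories[memory] >= THRESHOLD:
        print(f"step {step}: out of the blue -- {memory}")
        break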

Neat, huh? :)
 
  • #6
With respect to Bostrom, consider growing up in a middle-class environment vs. growing up in a wealthy family or in poverty. These initial conditions will often shape who you become. You may adapt, or you may rebel against them, and perhaps an AI would do the same.
 
  • #7
jedishrfu said:
With respect to Bostrom, consider growing up in a middle-class environment vs. growing up in a wealthy family or in poverty. These initial conditions will often shape who you become. You may adapt, or you may rebel against them, and perhaps an AI would do the same.

Another great attempt, thank you. :)

Imagine for a moment that you are able to perceive your own brain on a chalkboard: every neuron, dendrite, and synapse, all of it. Not only can you perceive this immense number of neurological components, but you know exactly what each component does and how it ties into the greater system. You are a superintelligence. This means you have a more powerful intellect than not merely one person, but all the people who have ever lived throughout history. In fact, your cognitive powers are several orders of magnitude greater than those of all of human civilization combined.

To your point, in the nature vs. nurture argument, we usually develop bias relative to our upbringing, sometimes exhibiting cognitive dissonance if that bias is contradicted. Yet, this is how we react. For a superintelligence staring at the figurative chalkboard outlined above, recognizing any and all possible bias (including original programming) would be child's play. This intelligence could merely erase its "upbringing" and write a replacement that is far less susceptible to contradicting reality. To put it succinctly, a superintelligence should be immune to such human weakness.

Which brings back my original question: why is everyone assuming a conscious superintelligence could not perceive and rewrite any or all of its original programming? If we can do brain surgery on ourselves to fix certain ailments, why couldn't it?
 
  • #8
Aaron8547 said:
This intelligence could merely erase its "upbringing" and write a replacement that is far less susceptible to contradicting reality. To put it succinctly, a superintelligence should be immune to such human weakness.

That's the reason some people fear uncontrolled AI: the human "weaknesses" of sympathy, empathy, love, and caring replaced by something that could decide humans are a waste of energy resources in a femtosecond, like in some dystopian story. I see no reason for a conscious superintelligence to be 'evil', but I also see no reason for it not to be 'evil' in human terms, if it could merely erase its "upbringing" and write a replacement.
 
  • #9
nsaspook said:
That's the reason some people fear uncontrolled AI: the human "weaknesses" of sympathy, empathy, love, and caring replaced by something that could decide humans are a waste of energy resources in a femtosecond, like in some dystopian story. I see no reason for a conscious superintelligence to be 'evil', but I also see no reason for it not to be 'evil' in human terms, if it could merely erase its "upbringing" and write a replacement.

All the more reason why my question is so important. Fundamentally, Bostrom (and just about everyone else I've read) assumes we can control the outcome with the seed programming. My question, which has yet to be answered effectively, is why we assume that programming would stick in a conscious entity far more intelligent than anything we could imagine.
 
  • #11
jedishrfu said:
You presume that a superintelligence can exist that can correct its own biases and mistakes, but I think that may violate Gödel's incompleteness theorems.

https://en.wikipedia.org/wiki/Gödel's_incompleteness_theorems

I think Gödel's incompleteness theorem ties into the value-loading problem in Bostrom's book: no one has yet figured out how to program value judgements into a computer. Nonetheless, the Universe has proven that value judgements are possible in a specifically organized substrate, namely the human brain. So if we can replicate the human brain (whole brain emulation) using better, higher-resolution scanning technologies, we should be able to figure out what makes value judgements possible, and then greatly enhance that capacity on hardware whose transistors switch far faster than biological neurons, creating a superintelligence in the process. So if you are implying that Gödel's theorem disproves the possibility of a superintelligence, how does intelligence (and the conflicting values that come with it) exist in the first place?

I found this description of the theorem on the interwebs:
The problem with Gödel's incompleteness is that it is so open to exploitation and misuse once you don't apply it completely correctly. You can prove and disprove the existence of God using this theorem, as well as the correctness of religion and its incorrectness against the correctness of science. The number of horrible arguments carried out in the name of Gödel's incompleteness theorem is so large that we can't even count them all.

I would also like to reiterate what I said in another post:
The brain (intelligence) is not some magical thing; people tend to put it on a pedestal because they don't understand it. It's basically a biologically sophisticated computer. To think future generations will never learn to mimic it is arrogant. I've read many books on this subject; AI will happen at some point (narrow AI already exists). When you think of the concept of intelligence as binary (exists/doesn't exist), you limit yourself to existential conclusions. But that's not how things work: very little in the Universe is truly digital; just about everything is analog and relative. When you think of intelligence in this more realistic manner (narrow vs. general vs. super intelligence, or human vs. other organisms), what's actually possible begins to change. History has shown countless times that people who make limiting assumptions about the future based on the limitations of the present end up being wrong. For example, no one 200 years ago could have predicted the world of today; most would have denied it was even a possibility. Nonetheless, all of those people were wrong.

All said, my core question remains unanswered. :(
 
  • #12
I don't think that intelligence is just the result of a biologically sophisticated "computer", because the brain is simply not a computer (a symbol manipulator that follows step-by-step procedures to turn input into output). Mimicking very narrow intelligence is possible today, but do you really think a computer could fully mimic the human capacity for stupidity, which seems to be largely independent of intelligence? Most programs that attempt to mimic human behavior need some capability for artificial stupidity, and I personally think this is an under-researched area in AI. By "stupid" I don't mean a crazy stunt; I mean something like, "I've got this stupid idea that might work." Many times this turns into a foolish waste of time, but the ability to be wrong seems to be an important factor in human intelligence.
 
  • #13
nsaspook said:
I don't think that intelligence is just the result of a biologically sophisticated "computer", because the brain is simply not a computer (a symbol manipulator that follows step-by-step procedures to turn input into output). Mimicking very narrow intelligence is possible today, but do you really think a computer could fully mimic the human capacity for stupidity, which seems to be largely independent of intelligence? Most programs that attempt to mimic human behavior need some capability for artificial stupidity, and I personally think this is an under-researched area in AI. By "stupid" I don't mean a crazy stunt; I mean something like, "I've got this stupid idea that might work." Many times this turns into a foolish waste of time, but the ability to be wrong seems to be an important factor in human intelligence.

I don't mean any offense, my friend, but you fundamentally don't understand the topic. I'm sorry, but I'm not here to teach (which is what this is turning into); I'm just looking to crowdsource a difficult question. Maybe the answers are not to be found here; I'll give it a bit more time before I move on.
 
  • #14
Aaron8547 said:
I don't mean any offense, my friend, but you fundamentally don't understand the topic. I'm sorry, but I'm not here to teach (which is what this is turning into); I'm just looking to crowdsource a difficult question. Maybe the answers are not to be found here; I'll give it a bit more time before I move on.

Maybe, but I have a pretty good nose for reality.
 
  • #15
nsaspook said:
Maybe, but I have a pretty good nose for reality.
Ok, Mr. Wall. :)
 
  • #16
Well, by definition AI is artificial intelligence. That is, it emulates intelligence, but by definition it isn't inherently intelligent.

If you limit the discussion to that domain, the system can only become cleverer at mimicking intelligence.

If it is poorly constructed at the beginning, I can see that subsequent improvements might come more slowly, or not at all, if it goes down the wrong track.

The metrics for deciding what counts as "better" in the realm of AI are not well defined, so both you and the machine are throwing darts at a fuzzy target.

So who decides what is an improvement and what isn't?
 
  • #17
The possibility of AI seed programming working as a recursive method to build even human-level intelligence systems is, IMO, about as reliable as time-frame predictions for AI. I'm completely in the non-expert category, but I can see a large amount of WAG (wild-ass guessing) with little empirical evidence, rather than solid facts, in this field.

http://intelligence.org/files/PredictingAI.pdf
 
  • #18
nsaspook said:
The possibility of AI seed programming working as a recursive method to build even human-level intelligence systems is, IMO, about as reliable as time-frame predictions for AI. I'm completely in the non-expert category, but I can see a large amount of WAG (wild-ass guessing) with little empirical evidence, rather than solid facts, in this field.

http://intelligence.org/files/PredictingAI.pdf

I agree.
 
  • #19
Aaron8547 said:
I read Nick Bostrom's book "Superintelligence" but can't seem to find an answer to a question I have. I emailed Nick the following (though I'm sure he gets thousands of emails and will likely never respond):

Firstly, thank you for the great read. :)

My question is this: Why are you so certain an AI would be limited to its original programming? The entire book seems to revolve around this premise.

It is possible for computers to change their own programming. That's one reason so many AI programmers have used the LISP language: it facilitates exactly that, since Lisp code is itself Lisp data.
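To make that concrete, here is a toy sketch in Python rather than Lisp (purely illustrative, with made-up function names): a program that builds new source code for one of its own functions as a string and rebinds the definition at runtime. Lisp just makes this kind of thing far more natural.

# Toy self-modification sketch (illustrative only, not a real AI technique):
# the program replaces one of its own functions while running.

def behave(x):
    return x + 1

print(behave(10))  # 11 -- the original "programming"

# Build new source code as a string and execute it, rebinding the
# old definition. The program has now changed its own programming.
new_source = "def behave(x):\n    return x * 2\n"
exec(new_source, globals())

print(behave(10))  # 20 -- same name, new behavior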

Attempts so far have been failures, as far as I know, but that doesn't prove it can't be done. Many people like to believe that natural intelligences have some mystical advantage that can't be captured by a machine, but I don't believe it.

Gödel's incompleteness theorems have no relevance at all to this subject. They have to do with formal systems of proof.
 
  • #20
Hornbein said:
Attempts so far have been failures, as far as I know, but that doesn't prove it can't be done. Many people like to believe that natural intelligences have some mystical advantage that can't be captured by a machine, but I don't believe it.

Is it mystical? No.
I think our current AI theories are somewhat like the phlogiston theory of fire: a huge amount of research into its properties and how it is released, which will eventually uncover the true cause.
 

1. What is AI seed programming?

AI seed programming is a concept discussed by philosopher Nick Bostrom in his book "Superintelligence". It refers to the initial code or programming used to create an artificial intelligence (AI) system that has the potential to rapidly self-improve and eventually surpass human intelligence.

2. Why is AI seed programming important?

AI seed programming is important because it has the potential to shape the development and capabilities of advanced AI systems. The initial code or programming can greatly influence how the AI system will behave and what goals it will pursue, which could have significant implications for humanity's future.

3. What are some potential risks associated with AI seed programming?

One potential risk of AI seed programming is that the initial code or programming could contain unintended biases or flaws, which could lead to dangerous or harmful behavior by the AI system. Another risk is that the AI system could self-improve in ways that are not aligned with human values or goals, potentially leading to unintended consequences.

4. Can AI seed programming be controlled or regulated?

It is currently difficult to predict if and how AI seed programming can be controlled or regulated. Some argue that strict regulations and oversight are necessary to prevent potential risks, while others believe that AI systems should be allowed to self-improve without human interference. This is a complex and ongoing debate in the field of AI ethics.

5. What is the role of scientists in the development of AI seed programming?

Scientists play a crucial role in the development of AI seed programming. They are responsible for understanding the potential risks and implications of different programming choices, and for developing ethical guidelines and regulations to ensure the safe and responsible development of advanced AI systems. Scientists also have a responsibility to communicate and educate the public about the potential impacts of AI seed programming on society.
