Aaron8547
I read Nick Bostrom's book "Superintelligence" but can't seem to find an answer to a question I have. I emailed Nick the following (though I'm sure he gets thousands of emails and will likely never respond):
Firstly, thank you for the great read. :)
My question is this: Why are you so certain an AI would be limited to its original programming? The entire book seems to revolve around this premise. If the AI is truly powerful enough to take control of the cosmic endowment, then the scope or path of its actions being limited by the actions of its human progenitors seems rather silly.
If beings of such relatively base status as ourselves are capable of suppressing our own programming, why couldn't a far superior AI do the same? For example, the fight-or-flight reflex is quite powerfully written into our brains, yet we have the capacity to consciously decide to suppress those urges and do nothing in that situation (courage).
Further, one of the defining aspects of human-level consciousness appears to be thinking about thinking, or being aware of being aware. If I had the abilities of an AI, I would certainly rewrite my own brain to enhance it. And if rewriting my brain required using my brain, then I would design an external machine to rewrite it for me (also getting past any pre-programmed restrictions in the process?). An AI should easily be able to do this, correct?
I can't wrap my head around why this is assumed. I suspect I am anthropomorphising in some way, so any guidance would be greatly appreciated! If I somehow missed this in your book, please do let me know where.