
Technological Singularity

  1. Sep 5, 2003 #1
    What are your thoughts on the future possibility of a technological singularity -- the creation of superhuman intelligence through technological means (such as artificial intelligence or augmentation of human brains)? This is discussed for example in http://www.kurzweilai.net/articles/art0585.html?m=1 [Broken].

    How likely do you think it is that such an event will occur, and if it does, on what sort of time scale? In the interview, Kaku argues (rightly) that Moore's Law will get into trouble in 15-20 years or so, because of quantum effects. He argues that this should give us some breathing space until we need to worry about the intelligence of machines surpassing that of humans. Leaving aside that it's always a good idea to worry about future existential threats far in advance, I have a few other problems with this view -- for example, might 15 to 20 years of accelerating progress not already be enough for the creation of artificial general intelligence or the augmentation of existing human intelligence? And isn't there a good chance that at least one of the other technologies mentioned -- quantum computing, DNA computing, molecular nanotech, and so on -- will take over or even improve on Moore's Law?
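To get a feel for what "15-20 more years of Moore's Law" means, here is a back-of-envelope calculation. The doubling time (roughly 18-24 months) and the year ranges are the commonly quoted ballpark figures, used here purely for illustration, not as a prediction:

```python
# Compounded growth under Moore's Law: capacity doubles every
# `doubling_time_years`, so after `years` it grows by 2^(years/doubling_time).
def growth_factor(years, doubling_time_years):
    """Factor by which computing capacity grows after `years`."""
    return 2 ** (years / doubling_time_years)

# Conservative end: 15 years at one doubling per 2 years -> ~181x
print(round(growth_factor(15, 2.0)))
# Optimistic end: 20 years at one doubling per 1.5 years -> ~10,000x+
print(round(growth_factor(20, 1.5)))
```

Even the conservative end gives two to four orders of magnitude more raw computing power before quantum effects bite, which is why "breathing space" may be the wrong way to think about those decades.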

    What consequences would a technological singularity have for science, technology and life in general? I think these would be profound -- it's sometimes said that an intelligence that creates an even better intelligence is the last invention we would ever need to make.

    What do you think can be done to ensure that if it happens, it will be beneficial rather than disastrous? Kaku mentions building in chips to shut robots off "when they start having murderous thoughts". I don't think this will do, at least not when they become truly intelligent and start designing even more intelligent versions of themselves. An AI need not have murderous thoughts to be dangerous -- it will very probably not even have traits such as aggression and egoism unless we build these in. Once an AI becomes sufficiently intelligent and complex, though, anything it decides to do could have negative consequences for humans, who may just be perceived as obstacles. A chip of the kind Kaku mentions would have to be able to recognize any thoughts that implicitly involve harming humans, even when the AI is trying to hide these thoughts, and even if it turns superhumanly intelligent. To ensure that the AI doesn't behave in any way humans consider malevolent or amoral, such a chip would practically need to be a benevolent AI in itself.

    Which leads one to the question: why not design an AI with benevolence toward sentient life in mind in the first place, rather than assume it will be hostile and work against it? Unlike what we're used to in humans, there is no reason to suppose an AI will develop its own agenda. An approach based on designing an AI to hold the moral views we do, for the same reasons we do -- or ultimately, views that we would like even better if we knew the reasons -- has the advantage that such an intelligence would not only not be hostile to us, but would actually want to help us. There is, I think, much that a transhumanly intelligent being could do to help solve human problems. Also, there would be no danger of the safety device (a chip, or pulling the plug) failing if the AI was designed not to have (or to want to have) any hostile intentions anyway. Therefore, I think this is both the most useful and the safest approach.

    Such an approach is advocated by the Singularity Institute for Artificial Intelligence to create what they call "Friendly AI" (http://www.singinst.org), and is also defended by Nick Bostrom in a recent paper on AI ethics (http://www.nickbostrom.com/ethics/ai.html). I think it offers the best chance for humanity to make it through intact, if scientific and technological advances will indeed make the future as turbulent as some predict.

    Any opinions on this are appreciated.
    Last edited by a moderator: May 1, 2017
  3. Sep 6, 2003 #2
    I think Kaku is being a bit optimistic. Quantum Computing and continuing studies in the architecture of the brain will lead to tremendous advances within the next twenty years, but the mathematics and science of networks is proving much more difficult to analyze. Already the indications are that Quantum Networks will have abilities beyond our classical ones, which suggests that if a singularity is possible we must first master such mathematics. In all likelihood, as is usually the case with technological developments, this will be done in stages. Rather than technological leaps based on a few basic discoveries, what usually occurs is a large series of smaller developments that come together.

    Current progress in AI research resembles the development of steam engines two hundred years ago. Rather than physics and other pure research efforts leading the way, the engineers virtually perfected the steam engine before the experts could explain how it worked. The same may hold true throughout the development of AI. The brute-force and intuitive approaches are proving the most fruitful today, in no small part due to the complexity of the subject.

    It is difficult to overstate the complexity of this. The natural temptation is to assume that we are simply missing a major part of the puzzle and/or a number of lesser ones. The reality, however, is that sometimes, for reasons unknown, things are simply beyond our capacity for the indefinite future. Cavemen could not have invented the steam engine, no matter how much time and effort they put into it.

    That said, it is only within recent decades that the behavioral and cognitive sciences have been reconciled, that the fields of mathematics and physics have begun to converge, and that a serious understanding of how the brain functions has been achieved. No matter how brute-force and intuitive our approach, under the circumstances it might well require a century or more of further developments before the science becomes mature -- if for no other reason than that humanity has a definite speed at which it can absorb each new development and then progress toward the next.
  4. Sep 6, 2003 #3



    I think it's a race between the technological singularity and the biological one. "The question is", said Humpty Dumpty, "Which is to be master?" Will intelligent machines swamp us, or will genetic engineering produce a post-human race that can meet them on their own terms?
  5. Sep 9, 2003 #4

    AI does seem to be a pretty difficult problem, and its difficulty has often been underestimated in the past. On the other hand, it can only become easier due to increased computing power, increased knowledge of how the human brain is organized, and perhaps (biotechnological? cybernetic?) intelligence enhancement in humans.

    Given enough advances in computing (say, if molecular nanotechnology is made to work or if it becomes possible to build and apply quantum computers), brute-forcing general AI could become possible (this is likely to be very dangerous, since it means AI could be created in thoughtless and unintelligent ways).


    Why need it be humans (or biological posthumans) versus machines? I could imagine many different sorts of motivations being created in artificial intelligences, including ones very altruistic toward humans (probably more so than humans could be).

    If it does come to such a confrontation (probably not due to aggressive or egoistic tendencies of machines, but rather due to their being amoral and other beings standing in the way to their strange goals), then I suspect humans or biotechnologically upgraded posthumans might not stand much of a chance -- too many constraints on brain and body. For example, neurons are really quite slow.
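The "neurons are slow" point can be made concrete with a rough comparison. The figures below are commonly cited ballpark values, used here as assumptions rather than measurements: neurons fire at up to a few hundred hertz, while even a modest processor switches at around a gigahertz:

```python
# Rough comparison of biological vs. electronic signalling rates.
# Both figures are assumed ballpark values for illustration only.
neuron_rate_hz = 200        # assumed peak neuron firing rate (~200 Hz)
transistor_rate_hz = 1e9    # assumed modest processor clock (~1 GHz)

speed_ratio = transistor_rate_hz / neuron_rate_hz
print(f"Switching-rate gap: roughly {speed_ratio:.0e}x")
```

A gap of millions in raw switching speed doesn't by itself make machines smarter, of course -- the brain is massively parallel -- but it does suggest why a mind on the faster substrate, once it exists, would be hard to race against.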

    That seems to me to make it all the more important to create AIs to be friendly towards humans, and with the capacity for moral growth. I think it can be done -- it just won't happen by itself.