Is a Technological Singularity Near, and How Can We Prepare for Its Impact?

In summary, the conversation discusses the possibility of a technological singularity, where superhuman intelligence is created through technological means. There are differing opinions on the likelihood and the time scale of this event, as well as its potential consequences for science, technology, and life in general. Some argue for the importance of designing AI with benevolence toward sentient life in mind, while others suggest that the complexity of the subject may limit our ability to achieve a singularity in the near future. Ultimately, the conversation highlights the need for careful consideration and ethical guidelines in the development of AI.
  • #1
Ontoplankton
What are your thoughts on the future possibility of a technological singularity -- the creation of superhuman intelligence through technological means (such as artificial intelligence or augmentation of human brains)? This is discussed, for example, in http://www.kurzweilai.net/articles/art0585.html?m=1.

How likely do you think it is that such an event will occur, and if it does, on what sort of time scale? In the interview, Kaku argues (rightly) that Moore's Law will get into trouble in 15-20 years or so, because of quantum effects. He argues that this should give us some breathing space until we need to worry about the intelligence of machines surpassing that of humans. Leaving aside that it's always a good idea to worry about future existential threats far in advance, I have a few other problems with this view -- for example, might 15 to 20 years of accelerating progress not already be enough for the creation of artificial general intelligence or the augmentation of existing human intelligence? And isn't there a good chance that at least one of the other technologies mentioned -- quantum computing, DNA computing, molecular nanotech, and so on -- will take over or even improve on Moore's Law?
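To make the timescale concrete, here is a rough back-of-envelope sketch in Python. The starting feature size, the shrink rate, and the atomic-scale cutoff are my own illustrative assumptions, not figures from the interview; the point is only that an exponential shrink from current dimensions reaches the atomic scale within a couple of decades.

# Illustrative sketch: how long can classical transistor scaling continue
# before feature sizes reach atomic dimensions, where quantum effects dominate?
feature_nm = 90.0          # assumed starting point: a mid-2000s ~90 nm process
atomic_limit_nm = 0.5      # rough scale at which quantum tunnelling dominates
halving_period_years = 3   # assumed time for the linear feature size to halve

years = 0
while feature_nm > atomic_limit_nm:
    feature_nm /= 2
    years += halving_period_years

print(f"Classical scaling reaches atomic dimensions in roughly {years} years")
# With these inputs the answer is about 24 years; slightly more aggressive but
# equally plausible assumptions give figures in Kaku's 15-20 year range.

The exact number matters less than the conclusion: under any plausible inputs, classical scaling has at most a few decades left.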

What consequences would a technological singularity have for science, technology and life in general? I think these would be profound -- it's sometimes said that an intelligence that creates an even better intelligence is the last invention we would ever need to make.

What do you think can be done to ensure that if it happens, it will be beneficial rather than disastrous? Kaku mentions building in chips to shut robots off "when they start having murderous thoughts". I don't think this will do, at least not once they become truly intelligent and start designing even more intelligent versions of themselves. An AI need not have murderous thoughts to be dangerous -- it will very probably not even have traits such as aggression and egoism unless we build these in. Once an AI becomes sufficiently intelligent and complex, though, anything it decides to do could have negative consequences for humans, who may simply be perceived as obstacles. A chip of the kind Kaku mentions would have to be able to recognize any thoughts that implicitly involve harming humans, even when the AI is trying to hide those thoughts, and even after it becomes superhumanly intelligent. To ensure that the AI doesn't behave in any way humans consider malevolent or amoral, such a chip would practically need to be a benevolent AI in itself.

Which leads one to the question: why not design an AI with benevolence toward sentient life in mind in the first place, rather than assume it will be hostile and work against it? Unlike what we're used to in humans, there is no reason to suppose an AI will develop its own agenda. An approach based on designing an AI to hold the moral views we do, for the same reasons we do -- or ultimately, views that we would like even better if we knew the reasons -- has the advantage that such an intelligence would not only not be hostile to us, but would actually want to help us. There is, I think, much that a transhumanly intelligent being could do to help solve human problems. Also, there would be no danger of the safety device (a chip, or pulling the plug) failing if the AI was designed not to have (or to want to have) any hostile intentions anyway. Therefore, I think this is both the most useful and the safest approach.

Such an approach is advocated by the Singularity Institute for Artificial Intelligence to create what they call "Friendly AI" (http://www.singinst.org), and is also defended by Nick Bostrom in a recent paper on AI ethics (http://www.nickbostrom.com/ethics/ai.html). I think it offers the best chance for humanity to make it through intact, if scientific and technological advances will indeed make the future as turbulent as some predict.

Any opinions on this are appreciated.
 
  • #2
Wuliheron
I think Kaku is being a bit optimistic. Quantum computing and continuing studies of the architecture of the brain will lead to tremendous advances within the next twenty years, but the mathematics and science of networks are proving much more difficult to analyze. Already the indications are that quantum networks will have abilities beyond our classical ones, which suggests that if a singularity is possible, we must first master such mathematics. In all likelihood, as is usually the case with technological developments, this will be done in stages. Rather than technological leaps based on a few basic discoveries, what usually occurs is a large series of smaller developments that come together.

Currently, progress in AI research resembles the development of steam engines two hundred years ago. Rather than physics and other pure research efforts leading the way, engineers had virtually perfected the steam engine before the experts could explain how it worked. The same may hold true throughout the development of AI: the brute-force and intuitive approaches are proving the most fruitful today, in no small part due to the complexity of the subject.

It is difficult to overstate that complexity. The natural temptation is to assume that we are simply missing a major part of the puzzle and/or a number of lesser ones. The reality, however, is that sometimes, for reasons unknown, things are simply beyond our capacity for the indefinite future. Cavemen could not have invented the steam engine, no matter how much time and effort they put into it.

That said, it is only within recent decades that the behavioral and cognitive sciences have been reconciled, that the fields of mathematics and physics have begun to converge, and that a serious understanding of how the brain functions has begun to emerge. No matter how brute-force and intuitive our approach, under the circumstances it might well require a century or more of further developments before the science becomes mature -- if for no other reason than that humanity can only absorb each new development at a certain pace before progressing to the next.
 
  • #3
selfAdjoint
I think it's a race between the technological singularity and the biological one. "The question is", said Humpty Dumpty, "Which is to be master?" Will intelligent machines swamp us, or will genetic engineering produce a post-human race that can meet them on their own terms?
 
  • #4
In reply to Wuliheron:

AI does seem to be a pretty difficult problem, and its difficulty has often been underestimated in the past. On the other hand, it can only become easier due to increased computing power, increased knowledge of how the human brain is organized, and perhaps (biotechnological? cybernetic?) intelligence enhancement in humans.

Given enough advances in computing (say, if molecular nanotechnology is made to work or if it becomes possible to build and apply quantum computers), brute-forcing general AI could become possible (this is likely to be very dangerous, since it means AI could be created in thoughtless and unintelligent ways).


In reply to selfAdjoint:

Why need it be humans (or biological posthumans) versus machines? I could imagine many different sorts of motivations being created in artificial intelligences, including ones very altruistic toward humans (probably more so than humans could be).

If it does come to such a confrontation (probably not due to aggressive or egoistic tendencies of machines, but rather due to their being amoral and other beings standing in the way of their strange goals), then I suspect humans, or even biotechnologically upgraded posthumans, might not stand much of a chance -- too many constraints on brain and body. For example, neurons are really quite slow.
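To put "neurons are slow" in rough numbers, here is a small illustrative comparison in Python. The specific figures are ballpark values I am assuming for the sake of the comparison, not numbers from this thread.

# Illustrative comparison of biological vs. electronic signalling speeds
neuron_firing_hz = 200.0          # generous sustained firing rate for a neuron
cpu_clock_hz = 3e9                # a ~3 GHz processor clock

axon_conduction_m_per_s = 100.0   # fast myelinated axon
wire_signal_m_per_s = 2e8         # electrical signal in a wire (~2/3 the speed of light)

print(f"switching:   electronics ~{cpu_clock_hz / neuron_firing_hz:,.0f}x faster")
print(f"propagation: electronics ~{wire_signal_m_per_s / axon_conduction_m_per_s:,.0f}x faster")
# Roughly a ten-million-fold switching advantage and a two-million-fold
# propagation advantage under these rough figures -- the kind of constraint
# a biological brain cannot simply engineer away.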

That seems to me to make it all the more important to create AIs to be friendly towards humans, and with the capacity for moral growth. I think it can be done -- it just won't happen by itself.
 

What is the Technological Singularity?

The Technological Singularity is a hypothetical future event in which artificial intelligence (AI) and other technologies advance to the point of surpassing human intelligence. This could lead to drastic changes in society and raises ethical concerns.

When will the Technological Singularity occur?

There is no definite answer to this question as it is purely speculative. Some experts predict it could happen within the next few decades, while others argue it may never occur. The rate of technological advancement and the development of AI will ultimately determine when or if it happens.

What are the potential consequences of the Technological Singularity?

Potential consequences of the Technological Singularity include the development of superintelligent AI that could outperform humans in all intellectual tasks, leading to job displacement and economic disruption. It could also raise ethical concerns about the control and potential dangers of advanced AI.

How can we prepare for the Technological Singularity?

Preparing for Technological Singularity involves understanding and monitoring advancements in AI and other technologies, as well as exploring potential solutions to address any negative consequences. This may include developing ethical guidelines and regulations for AI development and investing in education and training to adapt to a changing workforce.

Is the Technological Singularity inevitable?

There is no way to definitively answer this question, as it is based on speculation and predictions about the future. Some experts argue that the pace of technological advancement will inevitably lead to Technological Singularity, while others believe that human intervention can prevent or delay it from occurring.
