Would artificial intelligence and singularity mean the end of humanity?

In summary, the conversation discusses the possibility of AI becoming more intelligent than humans and the potential implications of such a development. While figures like Ray Kurzweil see this as a reality within the next 30 years, others argue that the motives and limitations of such a machine cannot be predicted. There is also debate over whether AI would merely be a more capable machine or whether it would also possess self-awareness and free will. Some express concern about the potential dangers of creating AI, while others dismiss the whole premise as speculation.
  • #1
gravenewworld
If you haven't seen the TIME article:

http://www.time.com/time/magazine/article/0,9171,2048299,00.html

So what exactly would happen if AI became more intelligent than any human that could ever exist? What would stop such a machine from creating another machine more intelligent than itself, ad infinitum? Would such a machine even want to compete with humans for resources? Should we even be messing around with this sort of thing? If we could combine our consciousness with a supercomputer, would it make us immortal and effectively make religion obsolete, since death would pretty much be cured?

It may sound like science fiction, but experts like Kurzweil see it as a reality that is about to happen within 30 years. Why would humans even want to bring into existence something that would be evolutionarily superior to themselves?
 
Last edited:
  • #2
Ray Kurzweil is not an expert. He has no knowledge of any of the subjects he speaks on, especially biology! This conversation is a non-starter for me; it might as well be a discussion on religion.

Also trying to second guess the motives of an intelligence that does not even exist yet is a pointless exercise, especially if you add the quasi-religious ideology that these AIs will be Gods and that beyond the singularity nothing can be predicted.
 
  • #3
ryan_m_b said:
Ray Kurzweil is not an expert. He has no knowledge of any of the subjects he speaks on, especially biology! This conversation is a non-starter for me; it might as well be a discussion on religion.

Also trying to second guess the motives of an intelligence that does not even exist yet is a pointless exercise, especially if you add the quasi-religious ideology that these AIs will be Gods and that beyond the singularity nothing can be predicted.

Ok well let's ask 2 questions here:

1.) is AI possible?

2.) could AI (if it is possible) become smarter than any human?

If the possibility exists that the answer is yes to those two questions, then I don't see what would stop such a machine from creating another machine or intelligence that is smarter than itself.
 
Last edited:
  • #4
gravenewworld said:
Ok well let's ask 2 questions here:

1.) is AI possible?

I would say that the fact that intelligence is possible at all is enough. Whether or not we could create a digital version on demand, whilst avoiding the ethical problems, is another (and for the moment irrelevant) matter.

EDIT: Also, we keep using the term "intelligence", but that's not quite what we are talking about, is it? We want an Artificial General Intelligence, AKA Strong AI. Therefore it has to be not only intelligent but sentient, adaptable, and possibly conscious.
gravenewworld said:
2.) could AI become smarter than any human?

Most likely, if only because it would have the advantages of perfect memory, easy reprogramming, and potentially much faster running speed.

EDIT2: It also has the advantage that you could spend hundreds of thousands of euros/pounds/dollars etc. training it to the equivalent of multiple degrees and PhDs in a subject, and then just copy and paste it on command.
gravenewworld said:
If the possibility exists that the answer is yes to those two questions, then I don't see what would stop such a machine from creating another machine or intelligence that is smarter than itself.

Perhaps there are limits we are unaware of. Perhaps trying to adjust too much causes some sort of insanity. However, I stick to the point I made above: trying to second-guess the motives of an intelligent entity that does not yet exist is pointless.
 
Last edited:
  • #5
A machine that is more intelligent than humans does not imply that it is self-aware or has free will.

Skippy
 
  • #6
skippy1729 said:
A machine that is more intelligent than humans does not imply that it is self-aware or has free will.

Skippy

This is very, very true. Though if I could I would swap the term "intelligent" with "capable". It's entirely conceivable that we could create software capable of highly intelligent activity that doesn't require any sort of general intelligence/sapience.
 
  • #7
...and there's also the opposable thumbs thing.
 
  • #8
I don't see a problem with strong AI (let's suppose one day there is such a thing). If it's more adaptive than humans, then it's quite natural for it to mean the end of Homo sapiens. If you think about it, unless we stop aging, we won't even need strong AI for such a process: every next generation will at some point be the end of the previous one. That's how things happen; the show must go on. And who says we are not intelligent design ourselves?
 
  • #9
russ_watters said:
...and there's also the opposable thumbs thing.

Not to mention the water thing. Always keep a bucket of water next to your bed in case of robot uprising.
 
  • #10
Ryan_m_b said:
Ray Kurzweil is not an expert. He has no knowledge of any of the subjects he speaks on, especially biology! This conversation is a non-starter for me; it might as well be a discussion on religion.

Also trying to second guess the motives of an intelligence that does not even exist yet is a pointless exercise, especially if you add the quasi-religious ideology that these AIs will be Gods and that beyond the singularity nothing can be predicted.

Good post. My main problem with Kurzweil is not only that he promotes such ideas but that he insists they will happen within a few decades.
Anybody with any knowledge of things like AI, biology, or nanotechnology knows we are nowhere near building anything he describes, like an army of nanobots that will fix everything, or the idea that just improving computing power equals artificial intelligence (as if my computer were more intelligent than my calculator).
And then he will say it's because of exponential growth, which we humans cannot understand.
 
  • #11
I'd be more concerned about natural stupidity.
 
  • #12
Ryan_m_b said:
This is very, very true. Though if I could I would swap the term "intelligent" with "capable". It's entirely conceivable that we could create software capable of highly intelligent activity that doesn't require any sort of general intelligence/sapience.

Then I wouldn't call such a machine intelligent; I'd just call it a better adding machine. Even if you could add 100,000,000,000,000,000,000 times as many numbers in a second as a human can in a minute, unless you have some sort of ability for original thought, which no computer has even the rudiments of, you are not truly "intelligent."
 
  • #13
AI is a fancy name for decision making based on preprogrammed conditions.
Computers can't be intelligent; they only execute algorithms...
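
To illustrate what "decision making based on preprogrammed conditions" looks like in practice, here is a minimal, hypothetical sketch (my own example, not from the thread): a rule-based program that only ever follows the branches its programmer wrote.

Code:
# A minimal rule-based "AI": every decision is a preprogrammed condition.
# The program never does anything its author did not explicitly encode.

def thermostat_decision(temperature_c: float) -> str:
    """Pick an action purely by checking hand-written rules."""
    if temperature_c < 18.0:
        return "turn heating on"
    elif temperature_c > 24.0:
        return "turn cooling on"
    else:
        return "do nothing"

if __name__ == "__main__":
    for reading in (15.0, 21.0, 30.0):
        print(f"{reading:>5.1f} C -> {thermostat_decision(reading)}")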
 
  • #14
estro said:
Computers can't be intelligent; they only execute algorithms...

If you claim that is the definition of "intelligence", then prove to me that you don't only execute algorithms.
 
  • #15
AlephZero said:
If you claim that is the definition of "intelligence", then prove to me that you don't only execute algorithms.

I never claimed that this is the "definition" of intelligence; I just pointed out one ingredient of it. [I don't possess the full definition =)]
However, I can prove that I don't follow an algorithm:

Suppose I'm executing an algorithm.
When I see the code of a computer program and look at a "while/if" loop (or a combination of the two), I'm able to tell whether the loop is infinite or not. The problem I just described is called the Halting Problem, yet Alan Turing proved in 1936 that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist.
Thus the initial assumption is wrong, which in turn means I don't execute an algorithm.

http://en.wikipedia.org/wiki/Halting_problem
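
As an aside (my own illustration, not part of the original post): the halting problem is about loops like the following sketch, whose termination is equivalent to the still-open Goldbach conjecture, so no amount of staring at the code settles whether the loop is infinite.

Code:
def is_prime(n: int) -> bool:
    """Trial-division primality test."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def goldbach_counterexample_search() -> int:
    """Step through even numbers >= 4 and return the first one that is NOT
    the sum of two primes. Whether this loop ever halts is equivalent to
    the Goldbach conjecture, which remains unproven."""
    n = 4
    while True:
        if not any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1)):
            return n  # a counterexample was found, so the loop halts
        n += 2

# As far as anyone knows, calling this would never return:
# print(goldbach_counterexample_search())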
 
Last edited:
  • #16
What makes you so sure it hasn't already happened? heh :)

Also, stupid robots programmed to do harm are more likely to be what does humanity in than super-smart ones. I mean, just look at the drone planes the military has; one person can probably fly 20 of them at a time, for all I know.
 
  • #17
Ryan_m_b said:
Not to mention the water thing. Always keep a bucket of water next to your bed in case of robot uprising.

There is more than one way to fight fires. I'll be carrying an axe.
 
  • #18
I was looking up AI and my mind began to wander. Assume that a future computer/AI has an incredible amount of storage and computational speed, as well as algorithms that allow it to manipulate information in every possible way, and that this computer is tied into a network that records all human activity and interfaces electronically with humans. My question is: would time travel be doable at that point? Would anyone willing to travel back in time just plug in? Would there be a program running that forbids changing the past? At this point I start thinking that maybe we are a simulation after all. Where is the guy with the red and blue pills?
 
  • #19
leonstavros said:
I was looking up AI and my mind began to wander. Assume that a future computer/AI has an incredible amount of storage and computational speed, as well as algorithms that allow it to manipulate information in every possible way, and that this computer is tied into a network that records all human activity and interfaces electronically with humans. My question is: would time travel be doable at that point? Would anyone willing to travel back in time just plug in? Would there be a program running that forbids changing the past? At this point I start thinking that maybe we are a simulation after all. Where is the guy with the red and blue pills?
(Emphasis mine)

You've lost me. How would strong AI observing everything happening lead to time travel?
 
  • #20
At some point in modern history, sci-fi writers far outpaced the real pace of technology development with their writings and wishful thinking; there are more fantasy elements than science.

Keep on dreaming.
 
  • #21
arabianights said:
At some point in modern history, sci-fi writers far outpaced the real pace of technology development with their writings and wishful thinking; there are more fantasy elements than science.

Keep on dreaming.
Is there a point in history when this was not the case?
 
  • #22
There will always be an off button somewhere.
 
  • #23
Ryan_m_b said:
(Emphasis mine)

You've lost me. How would strong AI observing everything happening lead to time travel?

It would be virtual time travel into a virtual reality composed of recorded events, much like the holodeck in Star Trek. Virtual time travel would be an outcome of extended recordings and reconstructed historical events.
 
  • #24
Inb4 Skynet reference.
 
  • #25
leonstavros said:
It would be virtual time travel into a virtual reality composed of recorded events, much like the holodeck in Star Trek. Virtual time travel would be an outcome of extended recordings and reconstructed historical events.
Right, so basically what you are asking is: "If a strong AI entity were hooked up to a Big Brother-style surveillance system, could we work out pretty much everything in history and simulate it for people to live in?" My response would be a resounding no. Just because you can observe everything does not mean you can simulate it perfectly, nor that you could build some sort of VR for people to live in.
 
  • #26
The Turing test says that if a human cannot tell a human from a computer, then as far as intelligence is concerned there is no difference between a human and a computer. Likewise, if one cannot tell a simulation from reality, then there is no difference.
 
  • #27
leonstavros said:
The Turing test says that if a human cannot tell a human from a computer, then as far as intelligence is concerned there is no difference between a human and a computer. Likewise, if one cannot tell a simulation from reality, then there is no difference.
And the Chinese room thought experiment is a good example of how the Turing test is wrong on that count.

Regardless, the problem with your proposal is not whether a faultless simulation is possible, but why you would think there is a logical chain between "AI observing everyone" and "matrix-style simulation".
 
  • #28
Seems like a good place to close.
 

1. What is artificial intelligence (AI) and the singularity?

Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems. The singularity, also known as the technological singularity, is a hypothetical event in which AI surpasses human intelligence and capabilities.

2. How could AI and the singularity lead to the end of humanity?

Some experts and futurists have raised concerns that as AI becomes more advanced and capable of self-improvement, it could potentially outsmart and overpower humans, leading to disastrous consequences for humanity.

3. Is there any evidence to support the idea that AI and the singularity could lead to the end of humanity?

While there have been many science fiction stories and theories surrounding the potential dangers of AI and the singularity, there is currently no concrete evidence to suggest that it will result in the end of humanity.

4. Are there any efforts being made to prevent the negative effects of AI and the singularity?

There are ongoing discussions and debates within the scientific and tech communities about how to safely and ethically develop and regulate AI. Some researchers and organizations are also actively working on developing safeguards and protocols to prevent potential negative outcomes.

5. Could AI and the singularity have positive impacts on humanity?

Yes, there are many potential benefits of AI and the singularity, such as improving efficiency, solving complex problems, and advancing scientific and medical research. It is important to continue researching and developing AI in a responsible and ethical manner to harness its potential for positive impact on humanity.
