Would artificial intelligence and the singularity mean the end of humanity?

AI Thread Summary
The discussion centers on the implications of advanced artificial intelligence (AI) potentially surpassing human intelligence. Key points include the concern over whether such a machine could create even more intelligent machines indefinitely and the ethical considerations of developing AI that could be evolutionarily superior to humans. Experts like Ray Kurzweil predict significant advancements in AI within the next few decades, but skepticism exists regarding the feasibility and timeline of such developments. Participants debate the definitions of intelligence versus capability, emphasizing that current AI operates on algorithms without true self-awareness or free will. The conversation also touches on the speculative nature of predicting the motives of future intelligences and the philosophical implications of merging human consciousness with machines. Additionally, there are discussions about the potential for virtual time travel through advanced AI, though skepticism remains about the practicality of such concepts. Overall, the thread reflects a mix of fascination and caution regarding the future of AI and its impact on humanity.
gravenewworld
If you haven't seen the TIME article:

http://www.time.com/time/magazine/article/0,9171,2048299,00.html

So what exactly would happen if AI became more intelligent than any human that could ever exist? What would stop such a machine from creating another machine more intelligent than itself, ad infinitum? Would such a machine even want to compete with humans for resources? Should we even be messing around with this kind of thing? If we could combine our consciousness with a supercomputer, would it make us immortal and render religion virtually obsolete, since death would be pretty much cured?

It may sound like science fiction, but experts like Kurzweil see it as a reality that will arrive within 30 years. Why would humans even want to introduce into existence something that would be evolutionarily superior to themselves?
 
Ray Kurzweil is not an expert. He has no knowledge of any of the subjects he speaks on, especially biology! This conversation is a non-starter for me; it might as well be a discussion on religion.

Also, trying to second-guess the motives of an intelligence that does not even exist yet is a pointless exercise, especially if you add the quasi-religious ideology that these AIs will be Gods and that beyond the singularity nothing can be predicted.
 
ryan_m_b said:
Ray Kurzweil is not an expert. He has no knowledge of any of the subjects he speaks on, especially biology! This conversation is a non-starter for me; it might as well be a discussion on religion.

Also, trying to second-guess the motives of an intelligence that does not even exist yet is a pointless exercise, especially if you add the quasi-religious ideology that these AIs will be Gods and that beyond the singularity nothing can be predicted.

OK, let's ask two questions here:

1.) Is AI possible?

2.) Could AI (if it is possible) become smarter than any human?

If the possibility exists that the answer to both of those questions is yes, then I don't see what would stop such a machine from creating another machine or intelligence that is smarter than itself.
 
gravenewworld said:
OK, let's ask two questions here:

1.) Is AI possible?

I would say that the fact that intelligence is possible at all is enough. Whether or not we could create a digital version on demand, whilst avoiding the ethical problems, is another (and for the moment irrelevant) matter.

EDIT: Also, we keep using the term "intelligence", but that's not really what we are talking about, is it? We want an Artificial General Intelligence, AKA Strong AI. It would therefore have to be not only intelligent but sentient, adaptable and possibly conscious.
gravenewworld said:
2.) Could AI become smarter than any human?

Most likely, if only because it would have the advantages of perfect memory, easy reprogramming and potentially faster running speed.

EDIT2: It also has the advantage that you could spend hundreds of thousands of euros/pounds/dollars etc. training it to the level of multiple degrees and PhDs in a subject, and then just copy and paste it on command.
gravenewworld said:
If the possibility exists that the answer to both of those questions is yes, then I don't see what would stop such a machine from creating another machine or intelligence that is smarter than itself.

Perhaps there are limits we are unaware of. Perhaps trying to adjust too much causes some sort of insanity. However, I stand by the point I made above: trying to second-guess the motives of an intelligent entity that does not exist yet is pointless.
 
A machine that is more intelligent than humans does not imply that it is self-aware or has free will.

Skippy
 
skippy1729 said:
A machine that is more intelligent than humans does not imply that it is self-aware or has free will.

Skippy

This is very, very true. Though if I could, I would swap the term "intelligent" for "capable". It's entirely conceivable that we could create software capable of highly intelligent activity that doesn't require any sort of general intelligence/sapience.
 
I don't see a problem with strong AI (let's suppose one day there is such a thing). If it's more adaptive than humans, then it's quite natural for it to mean the end of Homo sapiens. Come to think of it, if we stopped aging we wouldn't even need strong AI for such a process: every new generation is, at some point, the end of the previous one. That's how things happen; the show must go on. And who said we are not intelligent design ourselves?
 
russ_watters said:
...and there's also the opposable thumbs thing.

Not to mention the water thing. Always keep a bucket of water next to your bed in case of robot uprising.
 
  • #10
Ryan_m_b said:
Ray Kurzweil is not an expert. He has no knowledge of any of the subjects he speaks on, especially biology! This conversation is a non-starter for me; it might as well be a discussion on religion.

Also, trying to second-guess the motives of an intelligence that does not even exist yet is a pointless exercise, especially if you add the quasi-religious ideology that these AIs will be Gods and that beyond the singularity nothing can be predicted.

Good post. My main problem with Kurzweil is that not only does he promote such ideas, he insists they will happen within a few decades.
Anybody with any knowledge of fields like AI, biology or nanotechnology knows we are nowhere near building anything he describes, like an army of nanobots that will fix everything, or the claim that just improving computing power equals artificial intelligence (is my computer more intelligent than my calculator?).
And then he says it's because of exponential growth, which we humans supposedly cannot understand.
 
  • #11
I'd be more concerned about natural stupidity.
 
  • #12
Ryan_m_b said:
This is very, very true. Though if I could, I would swap the term "intelligent" for "capable". It's entirely conceivable that we could create software capable of highly intelligent activity that doesn't require any sort of general intelligence/sapience.

Then I wouldn't call such a machine intelligent; I'd just call it a better adding machine. Even if you could add 100,000,000,000,000,000,000 times as many numbers in a second as a human could in a minute, unless you have some sort of capacity for original thought - which no computer has even the rudiments of - you are not truly "intelligent."
 
  • #13
AI is a fancy name for decision making based on preprogrammed conditions.
Computers can't be intelligent; they only execute algorithms...
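
To illustrate what "decision making based on preprogrammed conditions" means, a toy example might look something like the following sketch (the agent and its rules are made up purely for illustration):

Code:
# A toy "AI": every decision is just a hand-written rule chosen in advance.
def thermostat_agent(temperature_c):
    if temperature_c < 18.0:
        return "turn heating on"
    elif temperature_c > 24.0:
        return "turn cooling on"
    else:
        return "do nothing"

# The "decision" is entirely determined by the preprogrammed conditions above.
print(thermostat_agent(15.0))  # -> turn heating on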
 
  • #14
estro said:
Computers can't be intelligent; they only execute algorithms...

If you claim that is the definition of "intelligence", then prove to me that you don't only execute algorithms.
 
  • #15
AlephZero said:
If you claim that is the definition of "intelligence", then prove to me that you don't only execute algorithms.

I never claimed that this is the "definition" of intelligence, I just pointed out one ingredient of it. [I don't possess the full definition =)]
However, I can prove that I don't follow an algorithm:

Suppose I'm executing an algorithm.
When I look at the code of a computer program and see a "while"/"if" loop (or a combination of the two), I'm able to tell whether the loop is infinite or not. The problem I just described is the Halting Problem, yet Alan Turing proved in 1936 that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist.
Thus the initial assumption is wrong, which in turn means I don't execute an algorithm.

http://en.wikipedia.org/wiki/Halting_problem
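
For anyone who wants to see where the contradiction in Turing's proof comes from, here is a rough Python sketch of the argument (halts() is only assumed to exist so it can be refuted, and the names are made up for illustration):

Code:
# Assume, for contradiction, that a general halting tester exists:
# halts(program, data) returns True if program(data) eventually halts, else False.
def halts(program, data):
    raise NotImplementedError  # Turing (1936): no such general algorithm can exist

# Build a program that does the opposite of whatever halts() predicts
# when it is fed its own source.
def contrary(program):
    if halts(program, program):
        while True:   # predicted to halt, so loop forever
            pass
    else:
        return        # predicted to loop forever, so halt immediately

# Now consider halts(contrary, contrary):
#   True  -> contrary(contrary) loops forever, so it does not halt.
#   False -> contrary(contrary) returns at once, so it does halt.
# Either answer is wrong, so the assumed halts() cannot exist.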
 
  • #16
What makes you so sure it hasn't already happened? heh :)

Also, stupid robots programmed to do harm are more likely to be what does humanity in than super-smart ones. I mean, just look at the drone planes the military has; one person can probably fly 20 of them at a time for all I know.
 
  • #17
Ryan_m_b said:
Not to mention the water thing. Always keep a bucket of water next to your bed in case of robot uprising.

There is more than one way to fight fires. I'll be carrying an axe.
 
  • #18
I was looking up AI and my mind began to wander. Assume that a future computer/AI has an incredible amount of storage and computational speed, as well as algorithms that allow it to manipulate information in every possible way, and that this computer is tied into a network that records all human activity and interfaces electronically with humans. My question is: would time travel be doable at that point? Would anyone willing to travel back in time just plug in? Would there be a program running that forbids changing the past? At this point I start thinking that maybe we are a simulation after all. Where is the guy with the red and blue pills?
 
  • #19
leonstavros said:
I was looking up AI and my mind began to wander. Assume that a future computer/AI has an incredible amount of storage and computational speed, as well as algorithms that allow it to manipulate information in every possible way, and that this computer is tied into a network that records all human activity and interfaces electronically with humans. My question is: would time travel be doable at that point? Would anyone willing to travel back in time just plug in? Would there be a program running that forbids changing the past? At this point I start thinking that maybe we are a simulation after all. Where is the guy with the red and blue pills?
(Emphasis mine)

You've lost me. How would strong AI observing everything happening lead to time travel?
 
  • #20
At some point in modern history, sci-fi writers far outpaced the real pace of technological development with their writings and wishful thinking; there are more fantasy elements than science.

Keep on dreaming.
 
  • #21
arabianights said:
At some point in modern history, sci-fi writers far outpaced the real pace of technological development with their writings and wishful thinking; there are more fantasy elements than science.

Keep on dreaming.
Is there a point in history where this was not the case?
 
  • #22
There will always be an off button somewhere.
 
  • #23
Ryan_m_b said:
(Emphasis mine)

You've lost me. How would strong AI observing everything happening lead to time travel?

It would be virtual time travel into a virtual reality composed of recorded events, much like the holodeck in "Star Trek". Virtual time travel would be an outcome of extensive recordings and reconstructed historical events.
 
  • #24
Inb4 Skynet reference.
 
  • #25
leonstavros said:
It would be virtual time travel into a virtual reality composed of recorded events, much like the holodeck in "Star Trek". Virtual time travel would be an outcome of extensive recordings and reconstructed historical events.
Right, so basically what you are asking is "if a strong AI entity were hooked up to a Big Brother-style surveillance system, could we work out pretty much everything in history and simulate it for people to live in?" My response would be a resounding no. Just because you can observe everything in no way means that you can simulate it perfectly, nor that you could build some sort of VR for people to live in.
 
  • #26
The Turing test says that if a human cannot tell a human from a computer, then as far as intelligence is concerned there is no difference between the two. Likewise, if one cannot tell a simulation from reality, then there is no difference.
 
  • #27
leonstavros said:
The Turing test says that if a human cannot tell a human from a computer, then as far as intelligence is concerned there is no difference between the two. Likewise, if one cannot tell a simulation from reality, then there is no difference.
And the Chinese room thought experiment is a good example of how the Turing test is wrong on that count.

Regardless, the problem with your proposal is not whether or not a faultless simulation is possible, but why you would think that there is a logical chain between "AI observing everyone" and "Matrix-style simulation."
 
  • #28
Seems like a good place to close.
 
