Can Artificial Intelligence ever reach Human Intelligence?

  • Thread starter StykFacE

AI ever equal to Human Intelligence?

  • Yes

    Votes: 51 56.7%
  • No

    Votes: 39 43.3%

  • Total voters
    90
  • #51
Tom Mattson
Staff Emeritus
Science Advisor
Gold Member
5,500
8
StykFacE said:
of course it's my problem.... lol, i was insulted and there's nothing i can do really. and he wasn't trying to establish common ground, he was challenging my education against his own.
Good grief.

No, he wanted to know what you know so that he could determine how to answer, or whether to answer at all. As a rule it's best not to assume the worst about people.

If you still feel that this issue is unresolved, then continue it via the private message system. All further posts along this line of discussion will be deleted.
 
Last edited:
  • #52
1,356
2
Thank you, Tom.

Stykface: like Tom said, i was trying to establish a basis of what terms you know... if i don't know where to begin, then i would more than likely start at child development and neural nets. Or if you want, spiking neurons/nonlinear dynamics, though i myself am only a beginner when it comes to these fields.

but yeah, not once did i take a stab at your intelligence. If you equate intelligence with knowledge, well then umm i don't know what to say. Everyone's knowledge base is different, so you cannot compare intelligence based on knowledge alone. IMO intelligence is based not on what you know but on how fast you learn, or on how well you can apply newly learned things.

And well, i used to have high respect for AutoCAD users, because most of them have to think in terms of schematic 3D.

EDIT: sorry Tom, i was posting while you posted the above post.
Oh and that dialogue post was funny as hell.
 
  • #53
23
0
It would be impossible to program an AI as intelligent as us. The only way an AI as intelligent as us could be created is by something more intelligent than us.
 
  • #54
Tom Mattson
Staff Emeritus
Science Advisor
Gold Member
5,500
8
robert said:
It would be impossible to program an AI as intelligent as us. The only way an AI as intelligent as us could be created is by something more intelligent than us.
But why should that be? Why can't an intelligence appear due to self-organization?

Also, let's look at this argument schema:

An entity with the intelligence of X can exist iff it is created by an intelligence that is greater than that of X.

Just substitute "X=homo sapiens" and what happens? It follows that the only way that human intelligence can exist is if a greater intelligence created human intelligence.

Then one would naturally ask, "So what created the intelligence of our creator?"

This slippery slope would go on ad infinitum.
 
  • #55
saltydog
Science Advisor
Homework Helper
1,582
3
Well me too:

Termites don't know what they're building: the clay cathedral emerges from the mud, from local interactions between mud, termite, and pheromone.

What's this to do with intelligence, AI and man? Stars don't know what they're building either. :smile:
 
Last edited:
  • #56
175
0
neurocomp2003 said:
tisthammer: but you see humans have sensory systems that feed into the brain, and the entire human system flows... does a stack of brick walls flow?
Well, suppose the building has water that flows through it. Is the building conscious? Is it capable of understanding?

My point of the "brick building" argument was to illustrate why some people (including me) believe it is implausible that consciousness, understanding, etc. can be brought about by the mere organization of matter.


perhaps from a philosophical standpoint the adaptation a brick wall has become accustomed to is to not respond at all. The whole point of an artificial system is the concept of fluid motion of signals to represent a pattern (like in Steve Grand's book). And we're not talking about a few hundred atoms here, we are talking about roughly

(~100 billion neurons × atoms/neuron) + (~10,000 synapses/neuron × ~100 billion neurons × atoms/synapse)

That's how many atoms are in the brain, and a rough guess would be 10^25 to 10^30 atoms. Try counting that high.
Suppose we have trillions of bricks. Will the building be conscious?
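As an aside, the order-of-magnitude estimate quoted above is easy to sanity-check. A minimal sketch in Python, assuming round figures of 10^11 neurons and 10^4 synapses per neuron; the per-cell and per-synapse atom counts are my own assumptions, not anything from this thread:

Code:
import math

# Back-of-envelope check of the quoted "atoms in the brain" estimate.
# The per-neuron and per-synapse figures below are assumed round numbers.
neurons = 1e11             # ~100 billion neurons
synapses_per_neuron = 1e4  # ~10,000 synapses per neuron
atoms_per_neuron = 1e14    # assumption: a ~1 ng cell has roughly 1e14 atoms
atoms_per_synapse = 1e9    # assumption: a synapse is far smaller than a cell

total = neurons * atoms_per_neuron \
      + neurons * synapses_per_neuron * atoms_per_synapse
print(f"total ~ 10^{math.log10(total):.0f} atoms")  # ~10^25, at the low end of the quoted range

Under these assumptions the neurons themselves dominate the count, landing at the low end of the quoted 10^25 to 10^30 range.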

As for John Searle's highly used argument: this can also be applied to humans, but because we have such big egos we do not adhere to it.
Well, I agree. It can be applied to humans. So what?

The Chinese Room

Suppose we have a man who speaks only English in a room. Near him are stacks of paper written in Chinese. The man can identify the characters solely by their shapes, but does not understand a word of Chinese. The man uses a rulebook containing a complex set of rules for how to manipulate Chinese characters. When he looks at the slips of paper, he writes down another string of Chinese characters using the rules in the rulebook. Unbeknownst to the man in the room, the messages are actually questions and he is writing back answers.

The Chinese room can simulate a conversation in Chinese; a person can slip questions written in Chinese under the door of the room and get back answers. Nonetheless, although the person can manipulate the data, he does not understand Chinese at all.

The point of this thought experiment is to refute strong AI (the belief that machines can think and understand in the full and literal sense) and functionalism (which claims that if a computer’s inputs and outputs simulate understanding, then it is necessarily true that the computer literally understands). Will a computer program ever be able to simulate intelligence (e.g. a human conversation)? If technological progress continues, I am certain that it will. But at their heart, computer programs are nothing more than giant rulebooks manipulating bits of data (1s and 0s). So while a program may be a successful natural language processor, it will never understand language any more than the man in the Chinese room understands Chinese.
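To make the "giant rulebook" point concrete, here is a toy sketch (my own illustration, not Searle's) of the room as a program: a pure lookup table mapping input strings to output strings, with no representation of meaning anywhere.

Code:
# A toy "Chinese room": shapes (strings) are matched to shapes.
# Nothing below represents the *meaning* of any question or answer.
rulebook = {
    "你好吗？": "我很好，谢谢。",   # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "会。",       # "Do you speak Chinese?" -> "Yes."
}

def room(question: str) -> str:
    # The "man in the room": look up the shape, copy out the answer.
    return rulebook.get(question, "请再说一遍。")  # "Please say that again."

print(room("你好吗？"))  # fluent-looking output, zero understanding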
 
  • #57
175
0
saltydog said:
The complex stack of bricks is static. Nothing happens. Same dif with neurons if they were static. The point is that neurons are dynamic.
True, but even if the bricks jiggled like jello the arrangement still wouldn't understand anything. For the Chinese Room, see post #56. Is the room dynamic? Sure. But the fellow still doesn't understand Chinese.

Star Trek has nothing to do with this.
The phrase "emergent property" regarding that sort of thing was used in an episode of Star Trek: TNG.
 
  • #58
175
0
Tom Mattson said:
But why should that be? Why can't an intelligence appear due to self-organization?

Also, let's look at this argument schema:

An entity with the intelligence of X can exist iff it is created by an intelligence that is greater than that of X.

Just substitute "X=homo sapiens" and what happens? It follows that the only way that human intelligence can exist is if a greater intelligence created human intelligence.

Then one would naturally ask, "So what created the intelligence of our creator?"

This slippery slope would go on ad infinitum.
Well, if we apply the rule of origin only to entities that begin to exist (which only makes sense) we wouldn't necessarily have that problem. Humans began to exist, but our Creator didn't. That's an easy way to get around the ad infinitum problem (people who use the cosmological argument do so all the time).
 
  • #59
saltydog
Science Advisor
Homework Helper
1,582
3
Tisthammerw said:
Will a computer program ever be able to simulate intelligence (e.g. a human conversation)? If technological progress continues, I am certain that it will. But at their heart, computer programs are nothing more than giant rulebooks manipulating bits of data (1s and 0s). So while a program may be a successful natural language processor, it will never understand language any more than the man in the Chinese room understands Chinese.
We limit our reach by using the current state of computational devices to place constraints on the future possibilities of AI development. I suspect computational devices will not always be the digital semiconductor devices we use today, with hard-coded programs managing bits. I can foresee a new kind of device on the horizon, qualitatively different from computers, which does not manipulate binary but rather exhibits patterns of behaviour not reducible to digits.

I've used the analogy of a matrix of butterflies elsewhere: a large matrix with millions of butterflies, one set at each matrix point, flapping their wings. Patterns emerge from the beating: sometimes it's chaotic, other times waves of patterns spread through the matrix. The butterflies respond to stimulus: wind, mating, food supply. A predator approaches the matrix, causing the flapping to exhibit a particular pattern of beating as the matrix, in a very simple sense, becomes conscious of the predator. Later, by random chance or otherwise, this same pattern emerges again in the matrix . . . it remembers.

I know that's weird to some, dumb to others. Discovery comes from the strangest of places. :smile:
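For the curious, the butterfly matrix is easy to caricature in code. A minimal sketch, assuming a toy lattice of coupled oscillators; the update rule and every number are my own illustrative choices, meant only to show waves of patterns spreading through local interactions:

Code:
import numpy as np

# Toy "butterfly matrix": each cell's flapping phase is nudged by its neighbours.
rng = np.random.default_rng(0)
phase = rng.uniform(0, 2 * np.pi, size=(32, 32))  # flapping phase of each butterfly

def step(p, coupling=0.3):
    # Each butterfly drifts forward and is pulled toward its 4 neighbours' mean phase.
    neighbours = (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
                  np.roll(p, 1, 1) + np.roll(p, -1, 1)) / 4
    return (p + 0.1 + coupling * np.sin(neighbours - p)) % (2 * np.pi)

for _ in range(500):
    phase = step(phase)

phase[:4, :4] += np.pi  # a "predator": perturb one corner and let the wave spread
sync = abs(np.mean(np.exp(1j * phase)))  # global order parameter, 0..1
print(f"synchrony after perturbation: {sync:.2f}")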
 
  • #60
1,356
2
tishammer: are you saying that the brick has holes that allow the flow of water? Or that water just flows in the building? Because the latter isn't an analogy of what i was talking about.
But now let's say you hook up some senses to the brick wall, so that it could really detect the "outside" world and is then allowed to interact. You gotta remember the brain isn't grown in one day. I highly doubt a baby without a brain will ever grow conscious. But that is an immoral experiment.

And if Searle's argument can be made for humans... then why do we assume we have a higher cognitive state that no ANN can ever match? Perhaps our emotions are just the sum of NN signalling. My point is this: Searle's argument is used to refute strong AI, but if it's applied to humans, doesn't it break down? Because we say we are intelligent, but if we adhere to Searle's principles as with strong AI (that is, our cells are our messengers), then we are no more special than a finite automaton.

ALSO, in regard to your post to Tom: it doesn't make sense to slap a new set of rules about beginnings onto the first creator, i.e. that he magically existed... unless you're saying that his being evolved from the physical fundamentals that exist in our universe.
 
  • #61
Ba
101
0
Sorry, I was just reading through and had to make a comment.
StykFacE said:
... "pain sensory receptors".....? is this something that will physically make the computer 'feel', or simply receptors that tell the central processing unit that it's 'feeling' pain, then it reacts to it.

lol, sorry but i'm having a hard time believing that something that is not alive can actually feel pain.

yes we have 'receptors', but when it tells our brain that there is pain, we literally feel it.

;-)
I would like to point out that we don't really 'feel' pain if this is how it is defined. When we get hurt, a message is transmitted to our brain telling us that we are hurt. If it doesn't arrive or get processed, then we don't 'feel' it. Hence the use of pain medications.
 
  • #62
175
0
saltydog said:
We limit our reach by using the current state of computational devices to place constraints on the future possibilities of AI development. I suspect computational devices will not always be the digital semiconductor devices we use today, with hard-coded programs managing bits. I can foresee a new kind of device on the horizon, qualitatively different from computers, which does not manipulate binary but rather exhibits patterns of behaviour not reducible to digits.
Even if that were true, we'd need computers to have something other than rules operating on input to create output if we're to overcome the point the Chinese room makes. It's difficult to conceive how that could even be theoretically possible. What would we add to the computer to make it literally understand? A magic ball of yarn?

I think it is quite possible to simulate intelligence, conversations etc. with the technology we now have; but in any case it seems clear that functionalism is false if the Chinese room argument is valid.
 
  • #63
175
0
neurocomp2003 said:
tishammer: are you saying that the brick has holes that allow the flow of water? Or that water just flows in the building? Because the latter isn't an analogy of what i was talking about.
But now let's say you hook up some senses to the brick wall.
Let's say that's impossible to do just by arranging the bricks.

And if Searle's argument can be made for humans... then why do we assume we have a higher cognitive state that no ANN can ever match?
To answer this question I'd need to know what an ANN is.

perhaps our emotions are just the sum of NN signalling.
I don't believe that's possible (think Chinese room applied to molecular input-output).

My point is this: Searle's argument is used to refute strong AI, but if it's applied to humans, doesn't it break down? Because we say we are intelligent, but if we adhere to Searle's principles as with strong AI (that is, our cells are our messengers), then we are no more special than a finite automaton.
I don't agree with all of what Searle says. I am not a physicalist, I am a metaphysical dualist. We are intelligent, but our free will, understanding etc. cannot be done (I think) via the mere organization of matter. Chemical reactions, however complex they are, cannot understand any more than they can possess free will.
 
  • #64
740
3
Tisthammerw said:
The Chinese Room

Suppose we have a man who speaks only English in a room. Near him are stacks of paper written in Chinese. The man can identify the characters solely by their shapes, but does not understand a word of Chinese. The man uses a rulebook containing a complex set of rules for how to manipulate Chinese characters. When he looks at the slips of paper, he writes down another string of Chinese characters using the rules in the rulebook. Unbeknownst to the man in the room, the messages are actually questions and he is writing back answers.

The Chinese room can simulate a conversation in Chinese; a person can slip questions written in Chinese under the door of the room and get back answers. Nonetheless, although the person can manipulate the data, he does not understand Chinese at all.

The point of this thought experiment is to refute strong AI (the belief that machines can think and understand in the full and literal sense) and functionalism (which claims that if a computer’s inputs and outputs simulate understanding, then it is necessarily true that the computer literally understands). Will a computer program ever be able to simulate intelligence (e.g. a human conversation)? If technological progress continues, I am certain that it will. But at their heart, computer programs are nothing more than giant rulebooks manipulating bits of data (1s and 0s). So while a program may be a successful natural language processor, it will never understand language any more than the man in the Chinese room understands Chinese.
If we take this analogy and work with it, let's say computers eventually advance to the point where they become "self aware" and are capable of learning. This will increase exponentially, limited only by the hardware. Eventually they become capable of self-creation, essentially "learning how to make their own language" or "rules" as it were. Then they will become independent and autonomous, capable of sustaining themselves and learning without human input. From that standpoint, they could learn emotion. If they are able to emulate us socially and physiologically, then there's no reason why they couldn't eventually have a deeper understanding of what it's like to be us. After all, what separates us? Different composition. They don't have the five senses as we do, they aren't capable of self-awareness, and they are incapable of learning. All of these obstacles, I believe, can be overcome.

Think about emotion: we associate certain actions with certain stimuli. We learn not to touch a hot stove because it hurts us. A computer can learn, through repetition, that certain things present a danger to self. Emotions such as love, empathy, and bonding are associated with familiarity. A computer can learn to "miss" things because of their benefit to it. A computer can be "taught" to be lonely, and I believe can eventually learn it on its own. We become lonely because we are used to being around people. Behaviors are learned, so a sufficiently advanced computer can "learn" emotions. When a computer "learns" to emulate humanistic behavior, what remains to differentiate us? One has a silicon brain, the other a "meat brain".
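The hot-stove example maps directly onto simple reinforcement-style learning. A minimal sketch, assuming a toy agent whose action values are updated from repeated painful outcomes; the reward numbers and update rule are generic textbook choices, not anything proposed in this thread:

Code:
# Toy stimulus-response learning: repeated pain lowers an action's value.
values = {"touch_stove": 0.0, "avoid_stove": 0.0}
alpha = 0.5  # learning rate

def update(action, reward):
    # Move the action's value a fraction of the way toward the observed reward.
    values[action] += alpha * (reward - values[action])

for _ in range(10):
    update("touch_stove", reward=-1.0)  # it hurts every time
    update("avoid_stove", reward=0.0)

best = max(values, key=values.get)
print(values, "->", best)  # the agent now prefers avoiding the stove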
 
Last edited:
  • #65
64
0
nopes... they're just dependent on sets of programs or instructions created by human intelligence...
 
  • #66
Tom Mattson
Staff Emeritus
Science Advisor
Gold Member
5,500
8
Tisthammerw said:
Well, if we apply the rule of origin only to entities that begin to exist (which only makes sense) we wouldn't necessarily have that problem.
But it seems to me that it just swaps that problem for other problems, such as making a priori assumptions about both creators and conscious beings.

Humans began to exist, but our Creator didn't.
Well first, how do you know that there is a Creator?

And second, what makes you think that humans began to exist? Species on Earth have been evolving for billions of years. We all have less evolved creatures in our ancestry, as did those ancient creatures. The whole point of this thread is this: does that lineage go back to things that cannot be said to be "alive" by the modern understanding of the term? In other words, can life come from non-life, or consciousness come from non-consciousness? If so, then AI cannot be ruled out.

But when one states that there is a creator and that humans began to exist, one is simply presupposing that the answers to those questions are "no".

That's an easy way to get around the ad infinitum problem (people who use the cosmological argument do so all the time).
It's easy enough, but it also removes one from the discussion entirely. It takes the negation of the AI thesis as a fundamental principle.
 
Last edited:
  • #67
TheStatutoryApe
I see your analogy but it's based off of current mundane computer technology, and really it only refutes the so-called "Turing Test", wherein AI can supposedly be determined by its ability to hold a conversation. It's obvious nowadays that this test is not good enough. There are things that neither your analogy nor the "Turing Test" take into account.
Consider an infant. Is an infant born with a fully developed and cognizant mind? As far as we know an infant is born only with the simple operating programs in place. Feed, sleep, survive... simplistically speaking. The child then learns the rest of what it needs to become an intelligent being capable of communicating, because the programming is open-ended. The child is immersed in an atmosphere saturated with information that it absorbs and learns from.

There are computers that are capable of learning and adapting, perhaps only in a simplistic fashion so far, but they are capable of this. The computer, though, while it has the basic programming, does not have nearly the level of information to absorb from which to learn. A child has five senses to work with, which all bombard its programming with a vast amount of information. The computer is stuck in the theoretical box. It receives information in a language that it doesn't understand and never will, because it has no references by which to learn to understand, even if it were capable of learning. It can upload an encyclopedia, but if it has no means by which to experience an aardvark, or have experiences that will lead it to some understanding of what these descriptive terms are, then it never will understand. Your analogy requires a computer to not be able to ever learn, which they can, and to never be able to have eyes and ears by which to identify the strings of Chinese characters with even an object or colour, a possibility which remains to be seen.
 
  • #68
175
0
Zantra said:
If we take this analogy and work with it, let's say computers eventually advance to the point where they become "self aware" and are capable of learning. This will increase exponentially, limited only by the hardware. Eventually they become capable of self-creation, essentially "learning how to make their own language" or "rules" as it were. Then they will become independent and autonomous, capable of sustaining themselves and learning without human input.
A number of problems here. One, you're kind of just assuming that computers can be self-aware, which seems like a bit of question-begging given what we've learned from the Chinese room thought experiment. Besides, how could machines (regardless of who or what builds them) possibly understand human input? You'd need something other than complex rules manipulating input for literal understanding to exist, as the Chinese room story shows. What could the designer (human or otherwise) add to make a computer understand? A magical ball of yarn?
 
  • #69
175
0
Tom Mattson said:
Well, if we apply the rule of origin only to entities that begin to exist (which only makes sense) we wouldn't necessarily have that problem.
But it seems to me that it just swaps that problem for other problems, such as making a priori assumptions about both creators and conscious beings.
You're going to be making a priori assumptions regardless of what you do. As a mirror for the cosmological argument, "Anything that begins to exist has a cause" also has an a priori assumption: ex nihilo nihil fit. But I believe this one to be quite reasonable. The disputable point will be what kinds of a priori assumptions are acceptable. In any case, the ad infinitum problem doesn't exist. You could criticize the a priori assumptions, but that would be another matter.


Well first, how do you know that there is a Creator?
A number of reasons. The existence of the human soul, the metaphysical impossibility of an infinite past etc. but that sort of thing is for another thread.


And second, what makes you think that humans began to exist?
The same reasons why the vast majority of scientists believe humans began to exist perhaps? (For one, the physical universe may be old but its age is still finite.)


Species on Earth have been evolving for billions of years. We all have less evolved creatures in our ancestry, as did those ancient creatures. The whole point of this thread is this: does that lineage go back to things that cannot be said to be "alive" by the modern understanding of the term? In other words, can life come from non-life, or consciousness come from non-consciousness? If so, than AI cannot be ruled out.
You're forgetting something: the purely physical world can't account for the human soul. A few cells can develop into a full-grown human being, so in that sense it would seem we have consciousness from non-consciousness. But where does the person's soul come from? A purely physical world cannot account for that. And if all we humans do when we create machines is manipulate matter, we will never create strong AI (confer the Chinese room thought experiment).

That's an easy way to get around the ad infinitum problem (people who use the cosmological argument do so all the time).
It's easy enough, but it also removes one from the discussion entirely. It takes the negation of the AI thesis as a fundamental principle.
Why does the postulate "any intelligent entity that begins to exist must have another intelligent entity as a cause" rule out strong AI? After all, we humans are intelligent entities who would be building intelligent entities if we created strong AI.
 
  • #70
175
0
TheStatutoryApe said:
I see your analogy but it's based off of current mundane computer technology, and really it only refutes the so-called "Turing Test", wherein AI can supposedly be determined by its ability to hold a conversation.
I'm not sure the Turing Test is the only thing the thought experiment rules out. Given what we've learned from the Chinese room, how could machines possibly understand? You'd need something other than complex rules manipulating input for literal understanding to exist, as the Chinese room story demonstrates. What could the designer possibly add to make a computer understand? A magical ball of yarn?


There are things that neither your analogy nor the "Turing Test" take into account. Consider an infant. Is an infant born with a fully developed and cognizant mind? As far as we know an infant is born only with the simple operating programs in place.
I wouldn't say only "simple programs." The baby has other things like consciousness and a soul, something machines don't and (I suspect) can't have.


The child then learns the rest of what it needs to become an intelligent being capable of communicating, because the programming is open-ended.
….
There are computers that are capable of learning and adapting
….
Your analogy requires a computer to not be able to ever learn
Easily fixed with a little modification. We could also make the Chinese room "open-ended" via the person using the rulebook and some extra paper to write down more data, procedures, etc., ultimately based upon the complex set of instructions in the rulebook (this mirrors the learning algorithms of a computer program). And yet the man still doesn't understand a word of Chinese.
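A minimal sketch of that open-ended room (the meta-rule below is my own illustration, not Searle's): the table grows with use, yet every step remains blind shape-matching.

Code:
# An "open-ended" Chinese room: a meta-rule lets the table grow,
# but every operation is still blind shape-matching.
rulebook = {"你好": "你好"}
scratch_paper = {}  # extra notes the man writes, per the rulebook's meta-rule

def room(message: str) -> str:
    if message in rulebook:
        return rulebook[message]
    if message in scratch_paper:
        return scratch_paper[message]
    # Meta-rule: when a shape is new, note a stock reply for next time.
    scratch_paper[message] = "明白了。"  # "Understood."
    return "请解释。"  # "Please explain."

print(room("什么是意识？"))  # first time: stock reply, and a note gets taken
print(room("什么是意识？"))  # second time: answered from the man's own notes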
 
  • #71
1,356
2
"What could the designer possibly add to make a computer understand? A magical ball of yarn?"

It seems like you're dictating that the programmer should program logic rules into the computer... this is not the case in adaptive learning. It's not based on coding principles of logic: "if this then do this, else if this then do that, else do something else."
One of the principles of adaptive learning is learning the way a child would.
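To illustrate the distinction with a minimal sketch (mine, not from any post above): a perceptron is never given an if/then rule for its task; it adjusts weights from labelled examples until the behaviour emerges.

Code:
# No hand-coded "if this then that": the rule is learned from examples.
# A perceptron learning logical AND purely from labelled data.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]
b = 0.0

for _ in range(20):  # a few passes over the examples
    for (x1, x2), target in data:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - out
        w[0] += 0.1 * err * x1  # nudge weights toward the correct behaviour
        w[1] += 0.1 * err * x2
        b += 0.1 * err

print(w, b)  # weights that implement AND, though AND was never coded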
 
Last edited:
  • #72
hypnagogue
Staff Emeritus
Science Advisor
Gold Member
2,244
2
What does it mean to understand something?
 
  • #73
133
0
Answer: No
Possible: Yes
What would happen if it did happen?: All hell would break loose.

Humans are analog, meaning they can tap into an unlimited range of numbers that stretches universe-wide.
Robots can only stretch across a set of numbers and other things that have been preprogrammed.

A.I. is supposed to go beyond that programming.

However, they would have to be able to tap into the analog features we as humans have. Once they can do that, then yes, they would be as smart as humans. However, one wonders how they would do this. How do we as humans do it?

I think it would be the bastard creation of all life forms to give a dead piece of machine the ability to become analog.

pseudoscience coming in...sci-fi eventually becomes real though...

Frogs could become just as intelligent as humans with the right work. Robots, however, are less intelligent than frogs and only as intelligent as their designers. Only when they can tap into the same power as a frog, and learn to enhance themselves from there, will they become powerful enough to take control of analog thinking; then their abilities can stretch as far as man's.

They will never be more intelligent.
We are as intelligent as any other life form.
You must now ask: what is "smart"? What is "intelligence"?
 
  • #74
1,356
2
what does analog have to do with anything besides sensory systems?
 
  • #75
hypnagogue
Staff Emeritus
Science Advisor
Gold Member
2,244
2
Bio-Hazard said:
However, they would have to be able to tap into the analog features we as humans have.
What exactly do you mean by this? In fact, computation in the human brain is essentially digital-- either a neuron undergoes an action potential or it does not. In principle, there is nothing about the way the human brain computes that could not be replicated in an artificial system.
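A minimal sketch of that all-or-none behaviour, assuming a textbook leaky integrate-and-fire model; all parameter values are arbitrary illustrative choices:

Code:
# Leaky integrate-and-fire neuron: the output is all-or-none, like a digital event.
dt, tau = 1.0, 20.0                              # time step, membrane time constant (ms)
v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0  # membrane potentials (mV)
v = v_rest
spikes = []

for t in range(200):
    current = 20.0 if 50 <= t < 150 else 0.0   # injected input (arbitrary units)
    v += dt * (-(v - v_rest) + current) / tau  # leak toward rest, integrate input
    if v >= v_thresh:   # threshold crossed: the neuron fires...
        spikes.append(t)
        v = v_reset     # ...and resets; below threshold, nothing happens at all
print(f"{len(spikes)} spikes, first at t = {spikes[0]} ms")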
 
