Can Artificial Intelligence ever reach Human Intelligence?

In summary: If we create machines that can think, feel, and reason like humans, we may be in trouble. ;-) The original poster argues that AI can never reach human intelligence because it would require a decision-making process that a computer cannot replicate.

AI ever equal to Human Intelligence?

  • Yes: 51 votes (56.7%)
  • No: 39 votes (43.3%)
  • Total voters: 90
  • #1
StykFacE
1st time post here... thought i'd post up something that causes much debate... but a good topic. ;-) (please keep it level-minded and not a heated argument)

Question: Can Artificial Intelligence ever reach Human Intelligence?

please give your thoughts... i vote no.
 
  • #2
I'm pretty sure my cell phone has more intelligence than some of the people I have met...
 
  • #3
Pengwuino said:
I'm pretty sure my cell phone has more intelligence than some of the people I have met...
lol, funny.

so your cell phone can think on its own? that's a pretty smart cell phone you got there.
 
  • #4
StykFacE said:
lol, funny.

so your cell phone can think on its own? that's a pretty smart cell phone you got there.

No it can't think on its own. Now think of those implications.
 
  • #5
whether humans make smart, or dumb, decisions, the level of complexity is far greater than anything a computer will ever manage.

i think that's what makes the difference mostly.
 
  • #6
Though not strictly artificial intelligence, right now I am using a program called Dragon NaturallySpeaking version 8 to dictate my comment and have these words translated into text automatically.
 
  • #7
pallidin said:
Though not strictly artificial intelligence, right now I am using a program called Dragon NaturallySpeaking version 8 to dictate my comment and have these words translated into text automatically.
so what is the point...? lol, I'm not sure i follow...
AI vs human intelligence is the issue at hand. ;-)
 
  • #8
I believe that program does have to make "decisions" on what your speech patterns mean and all.
 
  • #9
Pengwuino said:
I believe that program does have to make "decisions" on what your speech patterns mean and all.
no, a program is... "programmed". lol, it only does what it was programmed to do. there is no decision making process. a computer merely calculates numbers, and that's all a computer will ever do, no matter how advanced. ;-)
 
  • #10
StykFacE said:
no, a program is... "programmed". lol, it only does what it was programmed to do. there is no decision making process. a computer merely calculates numbers, and that's all a computer will ever do, no matter how advanced. ;-)

Using that definition, AI is undefinable. Your thread is thus useless.
 
  • #11
Pengwuino said:
... AI is undefinable...

how so? please comment... :biggrin:
 
  • #12
StykFacE said:
no, a program is... "programmed". lol, it only does what it was programmed to do. there is no decision making process. a computer merely calculates numbers, and that's all a computer will ever do, no matter how advanced. ;-)

Most people who deal with AI have better definitions than this when it comes to AI. You're basically saying the only platform AI is going to be used with is intrinsically incapable of AI.
 
  • #13
StykFacE said:
...there is no decision making process. a computer merely calculates numbers, and that's all a computer will ever do, no matter how advanced. ;-)
How do you know that we do not do the same thing? As Pengwuino stated, your definition is useless.

What is a decision-making process? Think of it as you are studying:

Sub ChapterEnd()
    ' decide what to do when the chapter ends
    If Sleepiness < 10 Then
        Study(NextChapter)
    Else
        GoToBed(YourRoom)
    End If
End Sub

OR

`Waa, I am so sleepy, I'd better study those Bessel functions tomorrow...`

One way of thinking of AI (and of making so-called intelligent robots) is to take a `pleasure function` as a base and let the machine decide which available action increases it the most. This is how you place `instincts`. For example, a robot's bumping into a wall decreases its pleasure function, but recharging its battery increases it, and so on. What would you get? Robots addicted to charging, as we may be addicted to sex etc.
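To make that `pleasure function` idea concrete, here is a minimal sketch in Python (the actions, the reward numbers, and the greedy rule are all made up for illustration, not anyone's actual design):

# toy "pleasure function" agent: at each step it greedily picks whichever
# available action it expects to raise its pleasure score the most
PREDICTED_PLEASURE = {
    "recharge_battery": +5.0,   # assumed reward: charging feels good
    "wander":           +0.5,   # assumed mild novelty bonus
    "bump_into_wall":   -3.0,   # assumed penalty: collisions hurt
}

def choose_action(actions):
    # greedy choice; a real robot would also have to learn these values
    return max(actions, key=lambda a: PREDICTED_PLEASURE[a])

pleasure = 0.0
for step in range(5):
    action = choose_action(list(PREDICTED_PLEASURE))
    pleasure += PREDICTED_PLEASURE[action]
    print(step, action, round(pleasure, 1))

# note: this greedy agent just recharges forever, which is exactly the
# "addicted to charge" behaviour described above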
 
  • #14
sure can, if you can code 100 billion neurons and 10,000 synapses on average per neuron and give it the sensory/motor skills of a human. It might work a bit slower, sort of like a child... but there are movements in California and Colorado to build the hardware... and I think Colorado already has a machine that's like 3-4 years old, can't remember what it's called though. Oh, did I forget to mention you've got to raise it for like 10-15 years.
 
  • #15
If you mean "raw intelligence", computers can already beat the best chess champions in the world, so I'm certain that surpassing human intelligence in complexity (think multi-tasking to the extreme) is inevitable. But programming it with emotions and intuitiveness could prove to be much more complexed. You're trying to teach a computer to ignore logic based on a "feeling". In that sense it would be very difficult to emulate us.

However, the deeper question, I think, isn't whether or not we CAN do this, but whether we SHOULD. Machines with superior intelligence that are self-aware may constitute a threat if they are given sufficient power and control. The counter to this, of course, is that we simply keep them in the box and don't give them arms and legs to pummel us with. However, society craves simplicity and convenience; the draw of a robot nanny may be too much to resist.
 
  • #16
if you keep them in a box and take off the limbs, how are they going to grow =] ... i mean, what if we were to do that to a baby?
 
  • #17
Humans can be creative; Computers cannot.

Creativity makes Humans different from Computers.

It's possible, but what'll the telltale difference between AI and Real I be?

That, and if computers had free will, what makes us so sure of no revolt?

The above poster's post, shortened into three sentences, and agreed with.
 
  • #18
Zantra said:
However, the deeper question, I think, isn't whether or not we CAN do this, but whether we SHOULD. Machines with superior intelligence that are self-aware may constitute a threat if they are given sufficient power and control. The counter to this, of course, is that we simply keep them in the box and don't give them arms and legs to pummel us with. However, society craves simplicity and convenience; the draw of a robot nanny may be too much to resist.

great statement, this is the kind of talk I'm looking for. however... can we ever 'create' an artificial conscience, for a computer to actually think on its own, with emotions and feelings? sure, bots and computers can simulate us very much so, but there's still a difference. i think that's a key thought... conscience. that is a very complex intelligence in itself. :smile:
 
  • #19
"Creativity makes Humans different from Computers" ?!? whose to say computers can't be creative? you ever seen someone code a program with 90%
precompiler directives?
 
  • #20
neurocomp2003 said:
"Creativity makes Humans different from Computers" ?!? whose to say computers can't be creative? you ever seen someone code a program with 90%
precompiler directives?
could a computer ever have a "gut feeling" regarding a situation?

could perhaps a computer ever physically feel pain from an emotion?
 
  • #21
sure. give it pain sensory receptors similar to a human's, then evolve it by coding billions of neurons and trillions of synapses. Don't forget a child is formed 9 months before it actually comes outta the womb. Does it feel gut feelings/pain while in the womb... perhaps... but how does that come to be?

Then when the child grows up it begins to know these sensations. What you expect outta a computer is to instantly have these sensations... why?
 
  • #22
I think your question should be whether or not, in the future, we will be able to make a computer think and feel like a human.
My answer is: not in the near future, but I think eventually we are going to get there.
The way I see it, the problem is with the software rather than the hardware; being able to develop a program that can make one single decision on its own would be a huge leap in the field.


http://en.wikipedia.org/wiki/Artificial_intelligence
 
  • #23
neurocomp2003 said:
sure. give it pain sensory receptors similar to a human's, then evolve it by coding billions of neurons and trillions of synapses. Don't forget a child is formed 9 months before it actually comes outta the womb. Does it feel gut feelings/pain while in the womb... perhaps... but how does that come to be?

Then when the child grows up it begins to know these sensations. What you expect outta a computer is to instantly have these sensations... why?
hmmm, "pain sensory receptors"...? is this something that will physically make the computer 'feel', or simply receptors that tell the central processing unit that it's 'feeling' pain, then it reacts to it.

lol, sorry but I'm having a hard time believing that something that is not alive can actually feel pain.

yes we have 'receptors', but when they tell our brain that there is pain, we literally feel it.

;-)
 
  • #24
how do you define alive...are we not the sum of our components?

"but when it tells are brain that there is pain, we literally" so are you talking about in the faked sense?
 
  • #25
neurocomp2003 said:
how do you define alive...are we not the sum of our components?

"but when it tells are brain that there is pain, we literally" so are you talking about in the faked sense?

What I’m talking about is the difference between "the human brain" and "the human conscience". They are not one and the same. The human brain is like a computer system: it calculates data, keeps our body running properly, and so forth. However, it does not physically coincide with the conscience. That is the complexity, and the technology, that computers will never reach. I'll explain...

Say a person has brain surgery and loses 20% of their brain due to a tumor, but lives. Do they lose 20% of their personality? Or 20% of their conscience? (Remember, I’m talking about someone who lives and still has 100% functionality in life as before.) No, they do not lose anything. That's because the conscience and the brain are separate entities.

Now take a highly advanced computer system that has the highest AI available in the future, so far advanced that you cannot tell the difference between it and a human. Now take away 20% of its CPU. What do you think would happen? The complexity and technology of the human brain and conscience together is the intelligence a computer will never reach. A conscience does not consist of particles or matter. You cannot create one for a highly advanced computer system. A computer will only be programmed, with learning capabilities at best.

IMO of course.

;-)
 
  • #26
I think the difference between humans and computers, in addition to what was mentioned before, is that humans can deal with unexpected situations when the need arises. Computers only function on pre-set rules. So in this way humans can learn from experience, whereas computers can't. There is active research in this area to make computers learn from experience, but with no fruitful results so far.
 
  • #27
"You cannot create one for a highly advanced computer system. A computer will only be programmed, with learning capabilities at best"?
how do you come to assume this... and what is DNA code?
And so from the statements you cite, do you conclude that physics has no fundamental rules and no fundamental structure? Because isn't that what a brain is, a physical structure?

and if consciousness and the brain are so separate... would you let me surgically remove your brain, or even some portion >50%? Anyways, I'm sick of these debates because I realize I had a similar one in these forums like in the middle of summer. Lastly, isn't consciousness grown from a fetus, which lies 9 months in a womb and only begins talking at the age of 1?
 
  • #28
Nice StykFacE.

I'm very optimistic that we'll one day create qualitatively different computational machines which simulate human cognition to the extent that creativity and discovery will emerge from them. I suspect we will have to "grow a mind" in some way similar to how humans do so in early development. The brain is massively non-linear and recurrent (outputs go back to inputs). That's why AI, in my opinion, has been a failure for the past 30 or so years: they're using linear mind-sets and devices (current digital computers) to model a highly non-linear one. A novel device (non-linear, I suspect) will emerge from technology one day, and its output will be qualitatively different from the output of present computers: at first it won't look like much and the results will be tough to interpret. Gradually, our paradigms for what constitutes cognition will change qualitatively as we begin to grow a new form of intelligence.
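Just to illustrate what "outputs go back to inputs" means, here is a tiny recurrent, non-linear update sketched in Python (the weights and the tanh non-linearity are arbitrary placeholders, not a model of any real brain):

import numpy as np

# minimal recurrent, non-linear update: the next state depends on the
# current state fed back through a non-linearity
rng = np.random.default_rng(0)
W = rng.normal(scale=1.5, size=(3, 3))   # made-up recurrent weights
x = rng.normal(size=3)                   # arbitrary initial state

for step in range(5):
    x = np.tanh(W @ x)   # the output becomes the next input
    print(step, np.round(x, 3))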

Would be nice to be around to see it happen. :smile:
 
  • #29
saltydog said:
Nice StykFacE.

I'm very optimistic that we'll one day create qualitatively different computational machines which simulate human cognition to the extent that creativity and discovery will emerge from them. I suspect we will have to "grow a mind" in some way similar to how humans do so in early development. The brain is massively non-linear and recurrent (outputs go back to inputs). That's why AI, in my opinion, has been a failure for the past 30 or so years: they're using linear mind-sets and devices (current digital computers) to model a highly non-linear one. A novel device (non-linear, I suspect) will emerge from technology one day, and its output will be qualitatively different from the output of present computers: at first it won't look like much and the results will be tough to interpret. Gradually, our paradigms for what constitutes cognition will change qualitatively as we begin to grow a new form of intelligence.

Would be nice to be around to see it happen. :smile:
I see what you're saying. However, I still believe that a conscience can never be created by human beings. I believe that computers will become so advanced that they will simulate awareness, or seem as if they are aware; however, computers consist completely of physical matter. The conscience is also referred to as the soul, or spirit; it is the inner self and is a non-physical thing. This is the key separation.

neurocomp2003:
"and what is dna code" - well it's not a conscience, now is it? lol, its genetic code.

"a brain is a physical structure" - of course it is, however the conscience is not. "If you apply a physical process to physical matter, you're going to get a different arrangement of physical materials. No matter how complex, it's still going to be physical." (quoted by J. P. Moreland) this is a VERY true statement in the world of physics and science. Can you argue?? Of course not. Now the conscience or soul... what very complex physical processes of matter arranged in such a way that we now have a non-physical state of awareness? please explain lol ;-)

"surgically remove your brain or even some portion>50%" - losing 50% of the brain is common with surgeries, accidents, strokes, etc. the medical field has records of many cases I'm sure, and some people still live 100% full lives. 'but they lost over 50% of their PHYSICAL brain? how can they PHYISICALLY still be the same person they were before hand?'

Maybe our NON PHYSICAL conscience is within our PHYSICAL DNA. lol

;-)
 
  • #30
StykFacE said:
I see what you're saying. However, I still believe that a conscience can never be created by human beings. I believe that computers will become so advanced that they will simulate awareness, or seem as if they are aware; however, computers consist completely of physical matter. The conscience is also referred to as the soul, or spirit; it is the inner self and is a non-physical thing. This is the key separation.

neurocomp2003:
"and what is dna code" - well it's not a conscience, now is it? lol, its genetic code.

"a brain is a physical structure" - of course it is, however the conscience is not. "If you apply a physical process to physical matter, you're going to get a different arrangement of physical materials. No matter how complex, it's still going to be physical." (quoted by J. P. Moreland) this is a VERY true statement in the world of physics and science. Can you argue?? Of course not. Now the conscience or soul... what very complex physical processes of matter arranged in such a way that we now have a non-physical state of awareness? please explain lol ;-)

"surgically remove your brain or even some portion>50%" - losing 50% of the brain is common with surgeries, accidents, strokes, etc. the medical field has records of many cases I'm sure, and some people still live 100% full lives. 'but they lost over 50% of their PHYSICAL brain? how can they PHYISICALLY still be the same person they were before hand?'

Maybe our NON PHYSICAL conscience is within our PHYSICAL DNA. lol

;-)

Well, they argue in here all the time about what consciousness constitutes. We have two groups: one thinks consciousness is something more than material existence, the other thinks otherwise. You are in the former, I see.

I just don't understand why people think consciousness is something beyond physical interpretation and construction. It's just an emergent property of large assemblies of neural networks. That's it. Nothing else in my view. We really are fragile creatures still limited in many ways by our beliefs about the world, ourselves, death, and life.

I used to look outside of my window at the world and wonder why about a lot of things. Ten years ago I started studying non-linear dynamics . . . I no longer wonder why about a lot of things. :smile:
 
  • #31
saltydog said:
I just don't understand why people think consciousness is something beyond physical interpretation and construction. It's just an emergent property of large assemblies of neural networks.

You've been watching too much Star Trek, lol.

Here's one way to understand it. Is a brick conscious? No. How about a building? No. How about any other highly complex stacking and arrangement of bricks? Again, I'd have to say no.

The same thing could be said for atoms. For a lot of people, it just seems too implausible that a collection of atoms could somehow "feel" or be conscious. Consciousness seems to require something fundamentally different.

Confer Searle's Chinese Room thought experiment (it also seems especially relevant to this discussion on A.I., but I'll have to post on it later because I’ve run out of time).
 
  • #32
saltydog said:
Ten years ago I started studying non-linear dynamics . . . I no longer wonder why about a lot of things. :smile:

I know nothing about non-linear dynamics, so I will not argue with you there... lol

I think I've stated enough, until neurocomp2003 disputes... I think we're on the same level of thinking, just different sides of the tracks. :rofl:
 
  • #33
I think that if you cast aside any notions of religious connotation, any spiritual aspect, and look at the pure mechanics, the human brain and all related processes are nothing but an advanced machine. Albeit the construct is flesh and tissue vs silicon and metal, it's still something that can be duplicated. You may speak to me about "soul" or "gut instinct", but that comes down to chemical processes in our brain and neurotransmitters affecting a decision that would otherwise be based on logic and experience. Can we teach a machine to "love"? Well, the simple answer is no, although we can definitely teach the machine to duplicate it. But we can definitely create machines that can outthink us, and that can eventually become self-aware. We cannot expect to duplicate to ultimate satisfaction that which we don't fully grasp. Love is induced by the release of endorphins and dopamine neurotransmitters. Is that how we think of love? No. But that is the process, and it can eventually be duplicated. The question is, can we make a machine that will "fall in love" or develop a profound sense of attachment to something or someone? If the eventual answer were yes, how would that make you feel? Would that impress you, frighten you, or disgust you?

I think it comes down to our own vanity: can we withstand our creation outpacing us in development? When our child outdistances us, will we feel a pang of jealousy? And not only that, but how will that machine then view us, having surpassed us? As an annoyance? When machines can love, will that lessen our importance and change the dynamic between us and them? Questions that will plague us as AI develops.
 
  • #34
Tisthammerw: but you see, humans have sensory systems that feed into the brain, and the entire human system flows... does a stack of bricks flow? Perhaps, from a philosophical standpoint, the adaptation a brick/wall has become accustomed to is to not respond at all. The entire point of an artificial system is the concept of fluidic motion of signals to represent a pattern (like in Steve Grand's book). And we're not talking about a few hundred atoms here; we are talking about:
(~10^11 neurons * atoms per neuron) + (~10^4 synapses per neuron * ~10^11 neurons * atoms per synapse)
That's how many atoms are in the brain, and a rough guess would most likely be 10^25 to 10^30 atoms. Try counting that high.
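For what it's worth, here is a quick back-of-the-envelope version of that estimate in Python (the atoms-per-neuron and atoms-per-synapse figures are rough assumptions, not measured values):

neurons = 1e11               # ~100 billion neurons
synapses_per_neuron = 1e4    # ~10,000 synapses per neuron
atoms_per_neuron = 1e14      # rough assumption for a whole cell
atoms_per_synapse = 1e10     # rough assumption for one synapse

total_atoms = neurons * atoms_per_neuron + neurons * synapses_per_neuron * atoms_per_synapse
print(f"{total_atoms:.1e} atoms")   # ~2.0e+25, at the low end of the 10^25-10^30 guess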

As for John Searle's much-used argument: this can also be applied to humans, but because we have such big egos we do not adhere to it. That is to say, we could have cells passing information that gives this resemblance of what we call a higher cognitive state, but in all reality it may just be a byproduct, like in Searle's argument against cognitive robotics. But as the superior beings that we are, we seem to neglect this side of the argument.

StykFacE: before I continue to argue, may I know your educational background... because nonlinear dynamics / multi-agent learning is a huge part of my thinking process.
As for the 50% argument... show me an experiment that has lesioned the prefrontal cortex (decision making) and occipital/parietal (LIP/7a, imagery) regions and has the patient living a non-vegetative life. Also the hippocampus. And I'm not talking partial lesions.

Zantra: "Can we teach a machine to "love"? No." I'd have to disagree, Personally emotions IMO comes down to interaction of a child and his surroundings(relatives/friends/parents) not simply by having NTs in your system.
 
  • #35
Tisthammerw said:
Here's one way to understand it. Is a brick conscious? No. How about a building? No. How about any other highly complex stacking and arrangement of bricks? Again, I'd have to say no.

The same thing could be said for atoms. For a lot of people, it just seems too implausible that a collection of atoms could somehow "feel" or be conscious. Consciousness seems to require something fundamentally different.

The complex stack of bricks is static. Nothing happens. Same dif with neurons if they were static. The point is that neurons are dynamic. That is the key: the mind, I believe, is a dynamic phenomenon independent of the substrate it finds itself in. The only example we have to date is a biological substrate, and so naturally the urge is to take mind to be a particular property of living systems. Star Trek has nothing to do with this. Frankly, get the bricks behaving in the same dynamic fashion as neurons and a brick conscience will emerge, as far as I'm concerned. Marbles too, for that matter. :smile:

Edit: I see above that Neurocomp and I got to the bricks at the same time, although I do not wish to suggest he's willing to go all the way to brick conscience like me :smile:
 
