Can Artificial Intelligence ever reach Human Intelligence?

In summary: If we create machines that can think, feel, and reason like humans, we may be in trouble. ;-) AI can never reach human intelligence because it would require a decision-making process that a computer cannot replicate.

AI ever equal to Human Intelligence?

  • Yes: 51 votes (56.7%)
  • No: 39 votes (43.3%)
  • Total voters: 90
  • #71
"What could the designer possibly add to make a computer understand? A magical ball of yarn?"

It seems like you're dictating that the programmer should program logic rules into the computer... this is not the case in adaptive learning. It's not based on coded principles of logic (if this, then do this; else if that, then do that; else do something else).
Some of the principles of adaptive learning involve learning the way a child would (see the sketch below).
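To make the contrast concrete, here is a minimal, hypothetical sketch in Python (my own toy example, not anything from a real system). The first function is the if-this-then-that style of coding; the perceptron below it is never told the rule at all and instead infers it from labeled examples, which is the flavor of learning I mean:

```python
# Hand-coded logic: the programmer writes the rule explicitly.
def hardcoded_and(x1, x2):
    if x1 == 1 and x2 == 1:
        return 1
    return 0

# Adaptive learning: a toy perceptron is shown labeled examples
# and adjusts its weights until its outputs match the labels.
# The AND rule is never written anywhere in this code.
def train_perceptron(examples, epochs=20, lr=0.1):
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            out = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
            error = label - out
            w1 += lr * error * x1
            w2 += lr * error * x2
            b += lr * error
    return w1, w2, b

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_perceptron(data)
print([1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
       for (x1, x2), _ in data])  # [0, 0, 0, 1], learned from data
```

The point is only where the rule lives: in the first case it is authored by the programmer; in the second it emerges from training data.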
 
  • #72
What does it mean to understand something?
 
  • #73
Answer: No
Possible: Yes
What would happen if it did happen? All hell would break loose.

Humans are analog, meaning they can tap into an unlimited range of numbers that stretch the universe wide.
Robots can only stretch across a set of numbers and other things that have been preprogrammed.

A.I. is supposed to go beyond that programming.

However, they would have to be able to tap into the analog features we as humans have. Once they can do that, then yes, they would be as smart as humans. However, one questions how they would do this. How do we as humans do it?

I think it would be the bastard creation of all life forms to give a dead piece of machinery the ability to become analog.

pseudoscience coming in...sci-fi eventually becomes real though...

Frogs could become just as intelligent as humans with the right work. Robots, however, are less intelligent than frogs and only as intelligent as their designers. Only when they can tap into the same power as a frog, and learn to enhance themselves from there, will they become powerful enough to take control of analog thinking; then their abilities can stretch as far as man's.

They will never be more intelligent.
We are as intelligent as any other life form.
You now must question: what is "smart"? What is "intelligence"?
 
  • #74
what does analog have to do with anything besides sensory systems?
 
  • #75
Bio-Hazard said:
However, they would have to be able to tap into the analog features we as humans have.
What exactly do you mean by this? In fact, computation in the human brain is essentially digital-- either a neuron undergoes an action potential or it does not. In principle, there is nothing about the way the human brain computes that could not be replicated in an artificial system.
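As a toy illustration of that all-or-nothing character (a sketch of my own, not a model of real neural dynamics), a leaky integrate-and-fire neuron in Python either crosses threshold and emits a full spike or stays silent; there is no half-spike:

```python
# Toy leaky integrate-and-fire neuron: the membrane potential v
# integrates input and decays ("leaks"); crossing the threshold
# produces a full spike (1) and a reset. The output is binary.
def simulate(inputs, threshold=1.0, leak=0.9):
    v = 0.0
    spikes = []
    for current in inputs:
        v = leak * v + current
        if v >= threshold:
            spikes.append(1)  # full action potential
            v = 0.0           # reset after firing
        else:
            spikes.append(0)  # no output at all
    return spikes

print(simulate([0.3, 0.3, 0.3, 0.3, 0.9, 0.1]))  # [0, 0, 0, 1, 0, 0]
```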
 
  • #76
Tisthammerw said:
The disputable point will be what kinds of a priori assumptions are acceptable. In any case, the ad infinitum problem doesn't exist. You could criticize the a priori assumptions, but that would be another matter.

I acknowledge that the ad infinitum problem is solved by your postulate, but I do not think that the postulate is acceptable. All it does is fill in a gap in understanding with something else that is not known. What good does that do?

A number of reasons: the existence of the human soul, the metaphysical impossibility of an infinite past, etc. But that sort of thing is for another thread.

I wouldn't agree that either of those reasons is self-evident, so I would still not accept the Causeless Creator.

The same reasons why the vast majority of scientists believe humans began to exist perhaps? (For one, the physical universe may be old but its age is still finite.)

That is not a foregone conclusion. No one knows if our universe is not the product of a sequence of "Big Bangs" and "Big Crunches". We could very well be part of such a cycle. It's entirely possible that what exists, always existed.

You're forgetting something: the purely physical world can't account for the human soul. A few cells can develop into a full-grown human being, so in that sense it would seem we have consciousness from non-consciousness. But where does the person's soul come from? A purely physical world cannot account for that. And if all we humans do when we create machines is manipulate matter, we will never create strong AI (confer the Chinese room thought experiment).

I'm not "forgetting" about the human soul. I don't believe that such a thing exists. Assuming that it does exist is just another way of denying that machines can ever think like humans, which is the very topic under discussion.

Why does the postulate "any intelligent entity that begins to exist must have another intelligent entity as a cause" rule out strong AI? After all, we humans are intelligent entities who would be building intelligent entities if we created strong AI.

By itself, it doesn't. But I was under the impression that you were building on robert's argument, which does explicitly assert the impossibility of humans creating entities that are as intelligent as humans.
 
  • #77
Creating intelligence would require us to know enough about "intelligence" to design and program it. It would seem to me that we would have to have a substantial understanding of the human thought process in order to pull it off. And that tends to become more of a philosophical and psychological issue rather than an engineering/design issue. To think we could design something that would figure itself out is a bit far-fetched to me.
 
  • #78
neurocomp2003 said:
"What could the designer possibly add to make a computer understand? A magical ball of yarn?"

It seems like you're dictating that the programmer should program logic rules into the computer... this is not the case in adaptive learning.

See the end of post #70.
 
  • #79
Tom Mattson said:
I acknowledge that the ad infinitum problem is solved by your postulate, but I do not think that the postulate is acceptable. All it does is fill in a gap in understanding with something else that is not known.

And what is that?


I wouldn't agree that either of those reasons is self-evident, so I would still not accept the Causeless Creator.

Perhaps not self-evident, but there are arguments against the infinite past, arguments for the existence of the human soul etc. But these are best saved for another thread.


The same reasons why the vast majority of scientists believe humans began to exist perhaps? (For one, the physical universe may be old but its age is still finite.)

That is not a foregone conclusion. No one knows if our universe is not the product of a sequence of "Big Bangs" and "Big Crunches".

There are a number of reasons why the cyclical universe doesn't work scientifically, but one of them is the observed accelerated expansion rate of the universe.


It's entirely possible that what exists, always existed.

I disagree, but arguments against an infinite past are best saved for another thread.


You're forgetting something: the purely physical world can't account for the human soul. A few cells can develop into a full-grown human being, so in that sense it would seem we have consciousness from non-consciousness. But where does the person's soul come from? A purely physical world cannot account for that. And if all we humans do when we create machines is manipulate matter, we will never create strong AI (confer the Chinese room thought experiment).

I'm not "forgetting" about the human soul. I don't believe that such a thing exists. Assuming that it does exist is just another way of denying that machines can ever think like humans, which is the very topic under discussion.


I'm not just "assuming" it exists; I offer it as a possible explanation why humans are capable of understanding and why machines are not. In other words, alleged counterexamples of consciousness arising from non-consciousness aren't necessarily valid. In any case, there's still the matter of the Chinese room thought experiment.

P.S. I have an argument for the soul, albeit best saved for another thread; in a nutshell it shows that "if free will exists then the soul must exist" but I suspect you do not believe in free will.
 
  • #80
hypnagogue said:
What does it mean to understand something?

Very interesting, Hypnagogue. Simple yet profound, and not overlooked by me. :smile: I suspect all of you have entertained that notion here already in a previous thread. It would be interesting to read what you and the others have said about it. Me, well, I'd lean toward dynamics: a synchronizing of neural circuits to the dynamics of the phenomenon being understood. 2+2=4? Not sure what dynamics are involved in that one, although I've been told by reputable sources that strange attractors may well be involved in memory recall. :smile:
 
  • #81
"Brain-state-in a box"
 
  • #82
Tisthammerw said:
I'm not sure that this is what the thought experiment only rules out. Given what we've learned from the Chinese room, how could machines possibly understand? You'd need something other than complex rules manipulating input for literal understanding to exist, as the Chinese room story demonstrates. What could the designer possibly add to make a computer understand? A magical ball of yarn?
The reason I believe that the scenario is aimed at the "Turing Conversation Test" is that it illustrates how a computer can easily emulate a conversation without actually needing to be sentient.

You seem to be ignoring some very important parts of my argument.
Rather than making ridiculous comments about magic balls of yarn, perhaps you can read my ideas on what could be done and comment on them instead?


Tisthammerw said:
I wouldn't say only "simple programs." The baby has other things like consciousness and a soul, something machines don't and (I suspect) can't have.
You are assuming here that the baby has a soul. There is no proof of the existence of a soul, and even if it does exist there is no proof that this soul would be necessary for a being to be sentient. You are only assuming this a priori. Does a chimp have a soul? Chimps are capable of learning and understanding a language. Dolphins use language. Many different sorts of life forms use basic forms of communication. So really the question is, I guess: do you believe only humans have the capacity for sentience, or only living things?



Tisthammerw said:
Easily fixed with a little modification. We could also make the Chinese room "open ended" via the person using the rulebook and some extra paper to write down more data, procedures, etc. ultimately based upon the complex set of instructions in the rulebook (this mirrors the learning algorithms of a computer program). And yet the man still doesn't understand a word of Chinese.
You see, an open-ended program wasn't my only criterion. As I stated earlier, you seem to be ignoring very important parts of my argument, and now I'll add that you are ignoring the implications of a computer being capable of learning. If the man in the room is capable of learning, he can begin to pick up on the patterns of the language code it is using, and even if he can't figure out what the words mean he can begin deciphering something about the language being used. One of the main points in my argument that I mentioned you did not comment on was sensory information input and experience. This goes hand in hand with the learning ability. If the man in the box were capable, whenever he saw a word, of receiving some sort of sensory input that would give him an idea of the meaning of the word, then he would begin to learn the language, no? Computers don't have this capacity, yet. If you took a human brain, put it in a box, and kept it alive, would it be capable of learning anything without some sort of sensory input? Don't you think that it may very well be nearly as limited as your average computer?
 
  • #83
ngek! social scientist, a robot?
 
  • #84
I haven't read all the posts, but computers have already leaped the first test of human-like intelligence: chess. Chess is incredibly complex. It includes some of the most obscure and difficult mathematical representations known. And IBM's Deep Blue has officially defeated the human world chess champion. How impressive is that? Have you guys played a decent chess computer lately? They are diabolically clever. I think I'm a decent chess player [USCF master], but my ten-year-old Mephisto is still all I can handle... and it disembowels me in one-minute speed chess games.
 
  • #85
Our minds are made of electrical impulses like a computer; we can only process things on a binary basis. Computers have memory, just like humans. The only difference between us and a computer is that we learn, and know what to delete in our minds automatically; a computer does not know how to learn. If a computer could be made to learn, then yes, a computer would be just like a human, if not much better.
 
  • #86
Tisthammerw said:
Tom: All it does is fill in a gap in understanding with something else that is not known.

Tisthammerw: And what is that?

What is what? The gap in understanding, or the unknown thing that your postulate tries to fill it with?

There are a number of reasons why the cyclical universe doesn't work scientifically, but one of them is the observed accelerated expansion rate of the universe.

As you've noted this is probably better suited for another thread, but you do need to read up on this. There are in fact cyclical models of the universe which include periods of accelerated expansion.

See this for example:

http://pupgg.princeton.edu/www/jh/news/STEINHARDT_TUROK_THEORY.HTML

I'm not just "assuming" it (edit: the human soul) exists;

Begging your pardon, but yes you are. You didn't deduce it from anything else on the table, so it was obviously introduced as an assumed postulate.

I offer it as a possible explanation why humans are capable of understanding and why machines are not.

But this is just another attempt to explain one unknown in terms of another. How could it lead to any real understanding?

P.S. I have an argument for the soul, albeit best saved for another thread; in a nutshell it shows that "if free will exists then the soul must exist" but I suspect you do not believe in free will.

You suspect rightly. But anyway, regarding the existence of the soul or a creator, it's not an argument I'm looking for. It's evidence.
 
  • #87
TheStatutoryApe said:
You seem to be ignoring some very important parts of my argument.

Like what? You made the point about a learning computer, and I addressed that.

Rather than making ridiculous comments about magic balls of yarn

I don't think it's ridiculous. I asked what else one could add to a computer to make it understand, and there doesn't appear to be anything other than "magic."


You are assuming here that the baby has a soul. There is no proof of the existence of a soul

See this web page for why (in part) I believe there is evidence for the soul.

Anyway, the main point of my mentioning the soul (and I should've said this earlier) is that I offer it as a possible explanation of why humans are capable of understanding and why machines are not. Some people claim that if humans can understand, we can build machines that understand also, but that is not necessarily true.


and even if it does exist there is no proof that this soul would be necessary for a being to be sentient. You are only assuming this a priori.

Not really. I am using the Chinese room for one piece of evidential support. I ask you again, what could be added to the computer other than a set of rules for manipulating input to make it understand?


Does a chimp have a soul?

I believe that any sentience requires the incorporeal, but that is another matter.


So really the question is, I guess: do you believe only humans have the capacity for sentience, or only living things?

So far it seems that only living things have the capacity for sentience. I have yet to find a satisfactory way of getting around the Chinese room thought experiment.


You see, an open-ended program wasn't my only criterion. As I stated earlier, you seem to be ignoring very important parts of my argument, and now I'll add that you are ignoring the implications of a computer being capable of learning.

Well, I did address the part about computer learning, remember? You seem to be ignoring some very important parts of my argument.


If the man in the room is capable of learning, he can begin to pick up on the patterns of the language code

That's a bit of question begging. The symbols mean nothing to him. Consider this rule (using a made-up language):

If you see @#$% replace with ^%@af

Would you understand the meaning of @#$% merely because you've used the rule over and over again? I admit that maybe he can remember input-output patterns, but that's it. The man may be capable of learning a new language, but this clearly requires something other than a complex set of rules for input/output processing. My question: so what else do you have?
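For what it's worth, a rule like that is trivially mechanizable. Here is a minimal Python sketch (my own toy; the symbols and the second rule are made up for illustration): it applies the substitutions flawlessly while storing nothing that could count as the meaning of any symbol.

```python
# A rulebook mapping input symbols to output symbols. Nothing in
# this table encodes what any symbol means; it is pure syntax.
rulebook = {
    "@#$%": "^%@af",
    "&*()": "qq!!z",  # a second made-up rule, for illustration
}

def answer(message):
    # Apply the rules token by token; unknown tokens pass through.
    return " ".join(rulebook.get(token, token)
                    for token in message.split())

print(answer("@#$% &*()"))  # -> "^%@af qq!!z"
```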


One of the main points in my argument that I mentioned you did not comment on was sensory information input and experience.

Well, the same holds true for my modified Chinese room thought experiment. The complex set of instructions tells the man what to do when new input (the Chinese messages) is received. New procedures and rules are created (ultimately based on the rulebook acting on input, which represents a computer program with learning algorithms), but the man still doesn't understand a word of Chinese.


This goes hand in hand with the learning ability. If the man in the box were capable, whenever he saw a word, of receiving some sort of sensory input that would give him an idea of the meaning of the word, then he would begin to learn the language, no?

Absolutely, but note that this requires something other than a set of rules manipulating input. It's easy to say learning is possible when we already have a fully intelligent, thinking, sentient human being. But you seem to forget that I am attacking the idea that it's possible to have a fully intelligent, thinking, sentient computer in the first place. My point is that if we limit ourselves to input/output processing like we do in the Chinese room, we simply don't get real understanding. And as my modified Chinese room illustrates, we can't get real understanding even with learning algorithms. So I ask again, what else do you have?
 
  • #88
Tisthammerw said:
I don't think it's ridiculous. I asked what else one could add to a computer to make it understand, and there doesn't appear to be anything other than "magic."

Yea, that's right, magic: "any sufficiently advanced technology is indistinguishable from magic" (Arthur C. Clarke). No doubt jet planes would have seemed so during the Middle Ages. It's all a matter of critical points in innovation which usher in qualitative change, thus beginning a revolution. :smile:
 
  • #89
Tom Mattson said:
Tom: All it does is fill in a gap in understanding with something else that is not known.

Tisthammerw: And what is that?

What is what? The gap in understanding, or the unknown thing that your postulate tries to fill it with?

The unknown thing.


As you've noted this is probably better suited for another thread, but you do need to read up on this. There are in fact cyclical models of the universe which include periods of accelerated expansion.

See this for example:

http://pupgg.princeton.edu/www/jh/news/STEINHARDT_TUROK_THEORY.HTML

From the web page:

After 14 billion years, the expansion of the universe accelerates, as astronomers have recently observed. After trillions of years, the matter and radiation are almost completely dissipated and the expansion stalls. An energy field that pervades the universe then creates new matter and radiation, which restarts the cycle.

Sounds awfully speculative, a little ad hoc, like a deus ex machina of a story ("No sufficient matter observed? That's okay. You see, there's this unobserved energy field that creates a whole bunch of matter after trillions of years in the unobservable future to save the day!") and still not without problems (e.g. the second law of thermodynamics).


I'm not just "assuming" it (edit: the human soul) exists;

Begging your pardon, but yes you are.

You cut off an important part of my quote:

I'm not just "assuming" it exists; I offer it as a possible explanation why humans are capable of understanding and why machines are not. In other words, alleged counterexamples of consciousness arising from non-consciousness aren't necessarily valid.

That's the main purpose of me mentioning it (and I admit, I should've explained that earlier). If you want to see some evidential basis why I believe the soul exists, see this web page. Again though, this argument presupposes free will.


I offer it as a possible explanation why humans are capable of understanding and why machines are not.

But this is just another attempt to explain one unknown in terms of another. How could it lead to any real understanding?

Many explanations lead to entities that were previously unknown. Atomic theory postulates unobserved entities to explain data; but that doesn't mean they don't lead to any real understanding. We accept the existence of atoms because we believe we have rational reason to think they are real. The existence of the soul also has rational support and explains understanding, free will, moral responsibility etc. whereas physicalism is insufficient. At least, that's why I believe they lead to understanding.


P.S. I have an argument for the soul, albeit best saved for another thread; in a nutshell it shows that "if free will exists then the soul must exist" but I suspect you do not believe in free will.

You suspect rightly. But anyway, regarding the existence of the soul or a creator, it's not an argument I'm looking for. It's evidence.

Well, evidential arguments for the soul are evidence nonetheless. I'm looking for evidence too. For instance, my direct perceptions tell me I have free will whenever I make a decision. What evidence is there that free will does not exist? A hard determinist could say that my perceptions of volition and moral responsibility are illusory. But if I cannot trust my own perceptions, on what basis am I to believe anything, including the belief that free will does not exist? Apparently none. Determinism and physicalism collapse, and likewise strong AI (confer the Chinese room and variants thereof) seems to be based more on faith than reason.
 
  • #90
saltydog said:
I don't think it's ridiculous. I asked what else one could add to a computer to make it understand, and there doesn't appear to be anything other than "magic."

Yea, that's right, magic: "any sufficiently advanced technology is indistinguishable from magic" (Arthur C. Clarke). No doubt jet planes would have seemed so during the Middle Ages.

No matter how far technological progress continues, there will always be limits; physical laws for instance. The Chinese room (and variants thereof) still pose a critical problem for strong AI, and you haven't solved it. It is difficult to see how real understanding for a computer can be even theoretically possible (unlike many other pieces of speculative technology). As I've shown, merely manipulating input can't produce real understanding. So I ask, what else do you have?
 
  • #91
tisthammer: are you saying that there are physical laws which apply only to carbon-based systems, that silicon-based systems cannot ever achieve?

Are you familiar with the term "self-similar fractals on multiple scales"?
Also, at the end of post #70 (you told me to look) I am unsure what that has to do with adaptive techniques.

Also, this concept of "understanding": do you believe it lies outside the brain? If so, do you believe that the "soul" lies outside the brain? And thus, if one removes the brain, do the soul/understanding continue to function?

And remember, we are not talking about a desktop PC, though we could be if we were using wireless signals to transmit to a robotic body. We are talking about a robot with the sensory information that a human child would have.

Lastly, you speak of the concept of a soul... If it travels from body to body, why does a child not instantly speak out of the womb? Do you believe it remembers from a past existence?
If so, then in what physical realm (not necessarily ours) does this soul exist?
If not, then what does a soul represent? If its transference to another body does not bring with it knowledge, languages, emotions, or artistic talents, what exactly is the purpose of a "soul" the way you would define it?
And if it does not travel from body to body, then does it exist only when a child exists, yet you still believe it has no physical presence in our known physics?

IMO, I believe we must discuss your idea of a soul (I believe you said it's for a different thread) here, because it is relevant to the discussion at hand. Firstly, we must clarify certain terminology (if you have posted your terms above, please refer me to them): Awareness, Consciousness, Understanding, Soul. The terms soul and understanding seem to be a big part of your argument that AI (let's use the proper term now, rather than just computer) can never have this spiritual existence that you speak of, and thus will only be mimicking a human no matter how real it could be. Which leads me to also ask: isn't it possible that human consciousness is only a byproduct of the fundamental structures of networks that lie within the brain? The constant flow of information to your language zones allows them to produce the words in your head, making you "believe" that you are actually thinking, and this continues for all time?
 
  • #92
neurocomp2003 said:
tisthammer: are you saying that there are physical laws which apply only to carbon-based systems, that silicon-based systems cannot ever achieve?

No, but I am saying there are principles operating in reality that seem to prevent a computer from understanding (e.g. one that the Chinese room illustrates). Computer programs just don't seem capable of doing the job.


Are you familiar with the term "self-similar fractals on multiple scales"?

I can guess what it means (I know what fractals are) but I'm unfamiliar with the phrase.


Also, this concept of "understanding": do you believe it lies outside the brain?

Short answer, yes. I believe that understanding cannot exist solely in the physical brain, because physical processes themselves seem insufficient to create understanding. If so, an incorporeal (i.e. soul) component is required.


If so, do you believe that the "soul" lies outside the brain?

The metaphysics are unknown, but if I had to guess I'd say it lies "within" the brain.


And thus, if one removes the brain, do the soul/understanding continue to function?

Picture a man in a building. He has windows to the outside world, and a telephone as well. Suppose someone comes along and paints the windows black, cuts telephone lines, etc. But once the building is gone, he can get up and leave. I think the same sort of thing is true for brain damage. The person can't receive the "inputs" from the physical brain and/or communicate the "outputs." If the physical brain is completely destroyed, understanding (which requires inputs) might be possible but would seem to require another mechanism besides the physical brain. This may be possible, and thus so is an afterlife.


And remember, we are not talking about a desktop PC, though we could be if we were using wireless signals to transmit to a robotic body. We are talking about a robot with the sensory information that a human child would have.

That may be true, but the same principles apply: manipulating input through a system of complex rules to produce "valid" output. This doesn't and can't produce understanding, as the Chinese room demonstrates. Visual data is still represented as 1s and 0s, rules of manipulation are still being applied, etc.


Lastly, you speak of the concept of a soul... If it travels from body to body, why does a child not instantly speak out of the womb? Do you believe it remembers from a past existence?

I don't think I believe it travels from "body to body," and I do not believe in reincarnation.

Why does the baby not speak outside of the womb? Well, it hasn't learned how to yet.


And if it does not travel from body to body, then does it exist only when a child exists, yet you still believe it has no physical presence in our known physics?

I don't know when the soul is created; perhaps it is only when the brain is sufficiently developed to provide inputs. BTW, here's my metaphysical model:

Inputs(sensory perceptions, memories etc.) -> Soul -> Outputs (actions etc.)

The brain has to be advanced enough to provide adequate input. In a way, the physical body and brain provides the “hardware” for the soul to do its work (storing memories, providing inputs, a means to do calculations, etc.).


IMO, I believe we must discuss your idea of a soul (I believe you said it's for a different thread) here, because it is relevant to the discussion at hand.

As you wish. Feel free to start a thread in the metaphysics section of this forum. I'll be happy to answer any questions.


Firstly, we must clarify certain terminology (if you have posted your terms above, please refer me to them): Awareness, Consciousness, Understanding, Soul.

The soul is the incorporeal basis of oneself; in my metaphysical theory it is the "receiver" of the inputs and the ultimate "initiator" of outputs. Awareness, consciousness, and understanding are the "ordinary" meanings as I use them (i.e. if you don't know what they mean, feel free to consult your dictionary; as I attach no "special" meaning to them).


The terms soul and understanding seem to be a big part of your argument that AI (let's use the proper term now, rather than just computer) can never have this spiritual existence that you speak of, and thus will only be mimicking a human no matter how real it could be. Which leads me to also ask: isn't it possible that human consciousness is only a byproduct of the fundamental structures of networks that lie within the brain?

No, because there would be no "receiver" to interpret the various chemical reactions and electrical activity occurring in the brain. (Otherwise, it would sort of be like the Chinese room.)


The constant flow of information to your language zones allows them to produce the words in your head, making you "believe" that you are actually thinking, and this continues for all time?

The words (and various other inputs) may come from the physical brain, but a soul would still be necessary if real understanding is to take place.
 
  • #93
That is correct, the speech software does make choices that have been taught to it, by a teacher! The programmer's only job was to write general learning software, not to teach it how to behave. Unless my job has been a fantasy for the last 15 years.
 
  • #94
If you believe the soul exists and is undefinable within the chemical processes going on in the brain, then the answer is no; but if you believe the brain is the sum of its parts, then the answer is yes.
 
  • #95
tisthammer: heh, I think you may need to define what you mean by a rule. Is it like a physics-based rule, where interactions, collisions, and forces dominate, or a math/CS-based rule, where logic is more prevalent?
 
  • #96
hypnagogue said:
What exactly do you mean by this? In fact, computation in the human brain is essentially digital-- either a neuron undergoes an action potential or it does not. In principle, there is nothing about the way the human brain computes that could not be replicated in an artificial system.

They have step-by-step processes; we have parallel thinking.
 
  • #97
Wrong... there is pseudo-parallelism (multithreading, parallel computing); granted, it may be slower than realtime, but it still exists. And I believe that certain companies are in the midst of developing truly parallel computers... look at your soundcard/videocard/CPU: they run on separate hardware.
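For what it's worth, here is a minimal sketch of that pseudo-parallelism using Python's standard threading module (a toy of my own; the task names are made up): the two workers run interleaved on shared hardware rather than truly simultaneously, which is exactly the "pseudo" part.

```python
import threading
import time

def worker(name, delay):
    # Each thread runs this loop "at the same time" as the other;
    # the scheduler interleaves them on the available hardware.
    for i in range(3):
        time.sleep(delay)
        print(f"{name}: step {i}")

threads = [
    threading.Thread(target=worker, args=("audio", 0.010)),
    threading.Thread(target=worker, args=("video", 0.015)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()  # wait for both "parallel" tasks to finish
```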
 
  • #98
The problem with the Chinese room example is that you're trying to argue in favor of a "soul". Could you please show me on a model of the human body where the soul is? Which organ is that attached to?

It's hard to replicate or emulate something that doesn't exist. We as humans are influenced by emotions. Emotions can be programmed. That is our "soul," as keeps being tossed about. However, to go along with it, we could suppose that it is our "soul" that allows us to feel empathy, pity, joy, sadness, etc. That's the "soul" you refer to, and it's possible to duplicate emotions. We use all of our senses to "understand" the world we live in. We learn from birth how the world works, and when it is appropriate to be happy, sad, angry, etc. I believe that, given sufficient technology, if a machine were "born" with the same senses as human beings, it could so closely replicate human behavior, intuitiveness, and intelligence as to be indistinguishable from the real thing.

An argument was made that a computer couldn't emulate human behavior because a machine can't exceed its programming. Well, a computer can have a "soul" if we program it with one. I agree that we still do not fully understand our own thought processes and how emotions affect our decisions, but that doesn't mean we won't someday. And if we can understand it, we can duplicate it. Someone else said computers can't "understand" human behavior. I have to repeat hypnagogue:

What does it mean to "understand"?

If we teach them what we want them to know, and give them the same faculties as we possess, they will inevitably "understand". If we tell a sufficiently advanced computer something like "when someone dies, they are missed; this is sad", eventually it would understand. Teaching through example is fundamental to human understanding.

I think there is a chasm here, but it's a spiritual one, not a technological one. If you leave the Chinese man in the room with the alphabet and the ruleset he may never learn Chinese. But if you translate one sentence into English for him, and give him sufficient time, eventually he will read Chinese fluently. Does the fact that he needed help to read the Chinese change anything? In some things a machine is lacking (i.e., it has to be taught emotions instead of being born with them). But in some instances it is more advanced (doesn't get tired, doesn't forget, etc.). A machine will never actually "be" a human being, because one is created naturally, the other artificially. However, this does not mean that a computer can't "understand" what it is to be human.

Let's narrow it down. If you could attach a functioning human brain to a humanistic robot, with all 5 senses, and allow that brain to operate all those senses, does this "machine" have a soul? Let's say it was the brain of a human child, without any experience as a human being. How would that creation develop? Would it learn the same way it would in a human body? If a machine could process touch, taste, smell, and sight in the same way humans do, wouldn't it basically have the same "understanding" as a human?

I believe the problem is that most people have trouble with the concept of a machine that can duplicate the human experience. It may be sci-fi today, but in 100 or 200 years it may be child's play. People conceptualize a machine as incapable of human understanding because, regardless of the CPU, it does not have the 5 senses, and because the AI of today is so childlike. When AI has advanced to the point where, if you give it a translation of English to Latin, it can not only understand Latin but every other language in the world, and then create its own linguistics, that will be a machine capable of understanding. And I think that type of intelligence scares people. Because then, we are the children.

EDIT: I believe the original question has been answered: machines can exceed humans in intelligence. Why? Because you can always build a better computer; we still haven't been able to improve the human brain. Not only that, but last I checked you couldn't connect multiple brains to process info simultaneously.

Therefore, the prominent questions remain: "can machines feel? can machines have a soul?"

EDIT 2: I've been thinking about this gap of emotional understanding. We can program a computer to show mercy, but will it understand why it shows mercy? The answer is a complex one. We have to show it, through example, why showing mercy is compassion. We have to teach it why there are benefits to itself in doing such things. Things have to be taught to machines which to us are beyond simplistic. However, a machine would not kill except in self-defense. Emotions are simultaneously our strengths and our weaknesses. But they can be taught.
 
  • #99
You should perhaps read up on Jeff Hawkins' theory of intelligence, and also read his book "On Intelligence".
I plan on designing something along those lines.
 
  • #100
Zantra said:
The problem with the Chinese room example is that you're trying to argue in favor of a "soul".

Actually, I'm primarily using it to argue against strong AI. But I suppose it might be used to argue in favor of the soul. Still, there doesn't seem to be anything wrong with the Chinese room argument: the person obviously does not understand Chinese.

Could you please show me on a model of the human body where the soul is? Which organ is that attached to?

Probably the physical brain (at least, that's where it seems to interact).


Well a computer can have a "soul" if we program it with one.

I doubt it. Given what we've learned from the Chinese room, how could machines possibly understand? You'd need something other than complex rules manipulating input for literal understanding to exist, as the Chinese room story demonstrates. What could the designer possibly add to make a computer understand? A magical ball of yarn?

If we teach them what we want them to know, and give them the same faculties as we possess, they will inevitably "understand".

But given the story of the Chinese room, that doesn't seem possible in principle.

I think there is a chasm here, but it's a spiritual one, not a technological one. If you leave the Chinese man in the room with the alphabet and the ruleset he may never learn Chinese. But if you translate one sentence into English for him, and give him sufficient time, eventually he will read Chinese fluently.

Great, but that doesn't help the strong AI thesis. It's easy to say understanding is possible when we already have a fully intelligent, thinking, sentient human being. But you seem to forget that I am attacking the idea that it's possible to have a fully intelligent, thinking, sentient computer in the first place. My point is that if we limit ourselves to input/output processing like we do in the Chinese room, we simply don't get real understanding. The man may be capable of learning a new language, but this clearly requires something other than a complex set of rules for input/output processing. My question: so what else do you have?


Let's narrow it down. If you could attach a functioning human brain to a humanistic robot, with all 5 senses, and allow that brain to operate all those senses, does this "machine" have a soul?

The human brain does.

Let's say it was the brain of a human child, without any experience as a human being. How would that creation develop? Would it learn the same way it would in a human body? If a machine could process touch, taste, smell, and sight in the same way humans do, wouldn't it basically have the same "understanding" as a human?

I suppose so, given that this is a brain of a human child. But this still doesn't solve the problem of the Chinese room, nor does it imply that a computer can be sentient. Computer programs use a complex set of instructions acting on input to produce output. As the Chinese room illustrates, that is not sufficient for understanding. So what else do you have?

People conceptualize a machine as incapable of human understanding because, regardless of the CPU, it does not have the 5 senses.

It's difficult to see why that would make a difference. We already have cameras and microphones which can be plugged into a computer, for instance. Machines can convert the sounds and images to electrical signals, 1s and 0s, process them according to written instructions etc. but we still have the same problem that the Chinese room points out.
 
  • #101
neurocomp2003 said:
tisthammer: heh, I think you may need to define what you mean by a rule. Is it like a physics-based rule, where interactions, collisions, and forces dominate, or a math/CS-based rule, where logic is more prevalent?

In the case of machines and understanding, it's more like a metaphysical principle, like ex nihilo nihil fit.
 
  • #102
Tisthammerw said:
Actually, I'm primarily using it to argue against strong AI. But I suppose it might be used to argue in favor of the soul. Still, there doesn't seem to be anything wrong with the Chinese room argument: the person obviously does not understand Chinese.

Your analogy has holes in it. Regardless of whether the man can understand Chinese, machines CAN understand us. It may not be able to empathize, but it understands the structure of things to the same degree as we do. You have to define for me exactly what it doesn't understand, exactly what it is that cannot be taught, because by my definition you can teach a machine anything that you can teach a human. Give me one example of something you can't teach a machine. The Chinese room springs from the notion that if something isn't inherently human by design, it cannot "understand" humanistic behavior. I think this is false. There is purpose behind each human emotion; it doesn't follow logic, but a computer can be taught to disregard logic when faced with an emotional situation.

Probably the physical brain (at least, that's where it seems to interact).

The brain is composed of synapses, dendrites, and action potentials, so I think we can agree that nowhere in the brain has any scan ever revealed a particular region that is the "soul". That is spirituality. I'm taking this from a totally scientific POV, which means that you can no more prove you have a soul than you can prove the machine doesn't have one.



I doubt it. Given what we've learned from the Chinese room, how could machines possibly understand? You'd need something other than complex rules manipulating input for literal understanding to exist, as the Chinese room story demonstrates. What could the designer possibly add to make a computer understand? A magical ball of yarn?

What does literal understanding encompass? Are you simply stating that to be us is to know us? That regardless of any superior intellectual capabilities it is beyond anyone to truly understand us unless they are us? If so, that's very presumptive and not realistic. If you put a CPU into a human body--to reverse the notion--it will be able to fully comprehend what it is to be human. It's said that the sum total of a person's memories can fit onto about 15 petabytes of drive space. That's about 10 years off, maybe. When you can transfer someone's entire mind to a computer, does that memory lose its soul? If an advanced AI computer analyzes this, would it not understand? All humanistic understanding requires is a frame of reference to be understood.

Great, but that doesn't help the strong AI thesis. It's easy to say understanding is possible when we already have a fully intelligent, thinking, sentient human being. But you seem to forget that I am attacking the idea that it's possible to have a fully intelligent, thinking, sentient computer in the first place. My point is that if we limit ourselves to input/output processing like we do in the Chinese room, we simply don't get real understanding. The man may be capable of learning a new language, but this clearly requires something other than a complex set of rules for input/output processing. My question: so what else do you have?

Well, if you can't teach him Chinese, you will just have to take him to China :wink:

I suppose so, given that this is a brain of a human child. But this still doesn't solve the problem of the Chinese room, nor does it imply that a computer can be sentient. Computer programs use a complex set of instructions acting on input to produce output. As the Chinese room illustrates, that is not sufficient for understanding. So what else do you have?

That's the way current AI operates. In the future this may not always be the case. I've been reading some of Jeff Hawkins' papers; interesting stuff. If you change the way a computer processes information, it may be capable of learning the same way we do, through association (see the sketch below). The Chinese room is a dilemma. I'm not suggesting that the Chinese room is wrong, exactly. I'm just saying we need to change the rules of the room so that he can learn Chinese (human language). The funny part is that this debate is a step backwards in evolution. We can teach a machine to understand why humans behave the way we do, but why would we want to teach them to "BE" human? Humans make mistakes. Humans do illogical things that don't make sense. Humans get tired, humans forget, humans get angry and jealous. Machines do none of those things. The purpose of machines is to assist us, not to take our place.

That being said, I believe that if we change the way machines process input, progress can be made. As far as how we get from point A to point B, that I can't answer.
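A crude sketch of what "learning through association" could look like in code (my own toy Hebbian-style example in Python, not Hawkins' actual model): patterns that occur together strengthen a connection, so later one pattern can recall its associate.

```python
# Toy Hebbian associative memory over binary vectors: co-occurring
# patterns strengthen weights ("cells that fire together wire
# together"), so a cue later recalls its stored associate.
def train(pairs, size):
    w = [[0.0] * size for _ in range(size)]
    for a, b in pairs:
        for i in range(size):
            for j in range(size):
                w[i][j] += a[i] * b[j]
    return w

def recall(w, cue):
    sums = [sum(w[i][j] * cue[i] for i in range(len(cue)))
            for j in range(len(w[0]))]
    return [1 if s > 0 else 0 for s in sums]

# Associate pattern A with pattern B, then recall B from A.
A, B = [1, 0, 1, 0], [0, 1, 0, 1]
w = train([(A, B)], size=4)
print(recall(w, A))  # -> [0, 1, 0, 1], i.e. B
```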
 
  • #103
Zantra said:
Your analogy has holes in it.

Please tell me what they are.


Regardless of whether the man can understand Chinese, machines CAN understand us.

That seems rather question begging in light of the Chinese room thought experiment. As I said in post #56 (p. 4 of this thread):


Will a computer program ever be able to simulate intelligence (e.g. a human conversation)? If technological progress continues, I am certain that it will. But at its heart computer programs are nothing more than giant rulebooks manipulating bits of data (1s and 0s). So while a program may be a successful natural language processor, it will never understand language any more than the man in the Chinese room understands Chinese.

The Chinese room thought experiment shows that literal understanding clearly requires something other than a complex set of rules manipulating input. But what else could a computer possibly have that would give it understanding?

Feel free to answer that question (I haven't received much of an answer yet).


You have to define for me exactly what it doesn't understand

Assuming "it" means a computer, a computer cannot understand anything. It may be able to simulate conversations etc. via a complex set of rules manipulating input, but it cannot literally understand the language anymore than the person in the Chinese room understands Chinese.


- exactly what it is that cannot be taught

It can be metaphorically taught; the program can be made so it changes itself based on the input it receives. But as I illustrated, this does not imply literal understanding. Note an example conversation of the Chinese room (translated into English):

Human: How are you doing?
Room: Just fine. What is your name?
Human: My name is Bob.
Room: Hello Bob.
Human: You've learned my name?
Room: Yes.
Human: What is it?
Room: Bob.

Learning has metaphorically taken place, and yet the person in the room really doesn't know the person's name; in fact he doesn't understand anything at all regarding this conversation. The problem is that "learning algorithms" are just another set of instructions, thus really not anything fundamentally different from the Chinese room (the man using a complex set of instructions), and not at all an answer to the question "what else do you have?" besides a complex set of instructions acting on input for the computer to have literal understanding.
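That kind of metaphorical learning is trivially easy to write down. A Python sketch in the spirit of the example above (my own toy; the rules are made up) stores "Bob" as plain data and echoes it back on cue, with nothing anywhere that grasps what a name is:

```python
import re

memory = {}  # "learned" facts live here as plain data

def room(message):
    # Rule 1: if the input matches the name pattern, store it.
    m = re.match(r"My name is (\w+)", message)
    if m:
        memory["name"] = m.group(1)
        return f"Hello {m.group(1)}."
    # Rule 2: if asked, echo the stored data back.
    if message == "What is it?":
        return memory.get("name", "I don't know.")
    if message == "You've learned my name?":
        return "Yes."
    # Default rule: a canned pleasantry.
    return "Just fine. What is your name?"

for line in ["How are you doing?", "My name is Bob.",
             "You've learned my name?", "What is it?"]:
    print("Human:", line)
    print("Room:", room(line))
```

The transcript it produces matches the conversation above, yet every response is a table lookup plus string substitution.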


The brain is composed of synapses, dendrites, and action potentials, so I think we can agree that nowhere in the brain has any scan ever revealed a particular region that is the "soul".

Well, duh. The soul is a nonphysical entity. I'm saying that is where the soul interacts with the physical world.


I'm taking this from a totally scientific POV, which means that you can no more prove you have a soul than you can prove the machine doesn't have one.

I wouldn't say that.


What does literal understanding encompass?

I use no special definition. Understanding means "to grasp the meaning of."


Are you simply stating that to be us is to know us? That regardless of any superior intellectual capabilities it is beyond anyone to truly understand us unless they are us?

The answer to both questions is no. Now how about answering my question? We'd need computers to have something else besides a set of instructions (however complex) manipulating input to overcome the point the Chinese room makes. It's difficult to conceive how that could even be theoretically possible. What could we possibly add to the computer to make it literally understand? A magic ball of yarn? A complex arrangement of bricks? What?


It's said that the sum total of a person's memories can fit onto about 15 petabytes of drive space. That's about 10 years off, maybe. When you can transfer someone's entire mind to a computer, does that memory lose its soul?

As for a person's literal mind, I don't even know if that's possible. But if you're only talking about memories--raw data--then I would say the person's soul is not transferred.


If an advanced AI computer analyzes this, would it not understand?

If all this computer program does is use a set of instructions to mechanically manipulate the 1s and 0s, then my answer would be "no more understanding than the man in the Chinese room understands Chinese." Again, you're going to need something other than rules manipulating data here.


If you change the way a computer processes the information, it may be capable of learning the same way we do, through association.

But if you're still using the basic principle of rules acting on input etc. that won't get you anywhere. A variant of the Chinese room also changes the way input is processed. But the man still doesn't understand Chinese.


I'm not suggesting that the Chinese room is wrong, exactly. I'm just saying we need to change the rules of the room so that he can learn Chinese (human language).

Absolutely, but note that this requires something other than a set of rules manipulating input. It's easy to say learning is possible when we already have a fully intelligent, thinking, sentient human being. But you seem to forget that I am attacking the idea that it's possible to have a fully intelligent, thinking, sentient computer in the first place. My point is that if we limit ourselves to input/output processing like we do in the Chinese room, we simply don't get real understanding. And as my modified Chinese room illustrates, we can't get real understanding even with learning algorithms. So I ask again, what else do you have?
 
  • #104
Tist, you seem to make quite a few assumptions without much other reason than "there must be something more."
Why exactly must it be that there is something more?
Why is a complex, mutable, and rewritable system of rules not enough to process information like a human? What does the soul do that is different from this?
Your soul argument is just a homunculus: there is a small insubstantial being inside of us that does all of the "understanding" or real information processing.
Let's hit that first. What is "understanding" if not a manner of processing information? You say that "understanding" is required for meaningful output. Then please elucidate for us the process that is understanding. How is understanding different from processing information via complex rules?
To this I am sure that you will once again invoke your Chinese room argument, but your Chinese room does not allow the potential AI any of the freedoms of a human being. The man in the Chinese room is just another homunculus scenario, except that you have made the assumption that this one is apparently incapable of this magical metaphysical property that you refer to as "understanding" (a magic ball of yarn in the human mind?).
You ask what else you add to an AI to allow it to "understand". I, and others, offered giving your Chinese room homunculus a view of the world outside, so that the language it is receiving has a context. Also give it the ability to learn and form an experience to draw from. You seem to have rejected this as simply more input that the homunculus won't "understand". But why? Simply because the AI homunculus doesn't possess the same magical ball of yarn that your soul homunculus has? Your soul homunculus is in the same position as the homunculus in the Chinese room. The Chinese room homunculus is that function which receives input and formulates a response based on complex rules received from the outside world. Your soul homunculus is in its own box, receiving information from the outside world in some sort of language that the brain uses to express what it sees and hears, but you say that its decision-making process is somehow fundamentally different. Does the soul homunculus not have a set of rule books? Does it somehow already supernaturally know how to understand brainspeak? What is the fundamental difference between the situations of the Chinese room homunculus and the soul homunculus?
 
  • #105
Tisthammerw said:
Please tell me what they are.

Ape seems to have beaten me to the punch. If we show the man in the room how to translate Chinese, that says to me that he is able to understand the language he is working with. No further burden of proof is required. Furthermore, let us assume we not only teach the man how to read Chinese, but also what the purpose of language is, how it allows us to communicate, etc. The man is capable of learning Chinese; that's the assumption. Your assumption is that the rules are static and can't be changed.

That seems rather question begging in light of the Chinese room thought experiment. As I said in post #56 (p. 4 of this thread):

And in response to that I say that the human mind is nothing more than an organic mirror of its CPU counterpart: processing input, interpreting the data, and outputting a response. You're trying to lure me into a "soul debate".

The Chinese room thought experiment shows that literal understanding clearly requires something other than a complex set of rules manipulating input. But what else could a computer possibly have that would give it understanding?

Essentially a CPU emulates the human brain in terms of processing information. If AI can learn the "why" behind answers to questions, that to me satisfies the requirement. The better question would be: what is the computer lacking that makes it incapable of understanding to your satisfaction?

It can be metaphorically taught; the program can be made so it changes itself based on the input it receives. But as I illustrated, this does not imply literal understanding. Note an example conversation of the Chinese room (translated into English):

Human: How are you doing?
Room: Just fine. What is your name?
Human: My name is Bob.
Room: Hello Bob.
Human: You've learned my name?
Room: Yes.
Human: What is it?
Room: Bob.

Learning has metaphorically taken place, and yet the person in the room really doesn't know the person's name; in fact he doesn't understand anything at all regarding this conversation. The problem is that "learning algorithms" are just another set of instructions, thus really not anything fundamentally different from the Chinese room (the man using a complex set of instructions), and not at all an answer to the question "what else do you have?" besides a complex set of instructions acting on input for the computer to have literal understanding.

The room understands that his name is Bob. What more needs to be known about Bob? That's an example of a current AI program. I can probably find something like that online. But what if the conversation went a little differently, e.g.:

Human: How are you today?
Room: I'm lonely. What is your name?
Human: My name is Bob. Why are you lonely?
Room: Nice to meet you, Bob. You are the first person I have met in 2 years.
Human: I can understand why you are lonely. Would you like to play a game with me?
Room: I would like that very much.

The computer appears to have more of a "soul". In fact, if we take away the name tags, we could easily assume this is a conversation between 2 people.

Well, duh. The soul is a nonphysical entity. I'm saying that is where the soul interacts with the physical world.

And I'm saying that if a computer has enough experience in the human condition, whether it has a soul or not doesn't matter; it still understands enough.

I use no special definition. Understanding means "to grasp the meaning of."

OK, then by that definition a computer is fully capable of understanding. Give me any example of something that a computer can't "understand" and I will tell you how a computer can be taught it, whether by example, experience, or just plain old programming. I'm talking about a computer that learns on its own without being prompted. A computer that sees something it doesn't understand, and takes it upon itself to deduce the answers using its available resources. That's true AI.

The answer to both questions is no. Now how about answering my question? We'd need computers to have something else besides a set of instructions (however complex) manipulating input to overcome the point the Chinese room makes. It's difficult to conceive how that could even be theoretically possible. What could we possibly add to the computer to make it literally understand? A magic ball of yarn? A complex arrangement of bricks? What?

You keep alluding to your own magic ball of yarn. What is this magical property that you keep hinting at but never defining? What is this thing that humans have that machines cannot possess? Are you talking about a soul? What is a soul exactly? How about curiosity? If we design a machine that is innately curious, doesn't that make it strikingly human in nature?

If all this computer program does is use a set of instructions to mechanically manipulate the 1s and 0s, then my answer would be "no more understanding than the man in the Chinese room understands Chinese." Again, you're going to need something other than rules manipulating data here.

Then what do humans use to process data? How do we interact with the world around us? We process sensory input (i.e., data), we process the information in our brains (the CPU), and then we react to that processed data accordingly (output). What did I miss about the human process?

But if you're still using the basic principle of rules acting on input etc. that won't get you anywhere. A variant of the Chinese room also changes the way input is processed. But the man still doesn't understand Chinese.

Yet you still refuse to teach the guy how to read Chinese. Man, he must be frustrated. Throw the guy a bone :wink:

Absolutely, but note that this requires something other than a set of rules manipulating input. It's easy to say learning is possible when we already have a fully intelligent, thinking, sentient human being. But you seem to forget that I am attacking the idea that it's possible to have a fully intelligent, thinking, sentient computer in the first place. My point is that if we limit ourselves to input/output processing like we do in the Chinese room, we simply don't get real understanding. And as my modified Chinese room illustrates, we can't get real understanding even with learning algorithms. So I ask again, what else do you have?

You have to change your way of thinking. Sentience can be had in a machine. Can I tell you how to go out and build one? Can I tell you how something like this will be accomplished? No. But this isn't science fiction, it is science future. It's hard to see how to launch rockets into space when we've just begun to fly; we're still at Kitty Hawk. But in time it will come.
 
