Can Artificial Intelligence ever reach Human Intelligence?

Summary
The discussion centers on whether artificial intelligence (AI) can ever achieve human-like intelligence. Participants express skepticism about AI reaching the complexity of human thought, emphasizing that while machines can process information and make decisions based on programming, they lack true consciousness and emotional depth. The conversation explores the differences between human and machine intelligence, particularly in terms of creativity, emotional understanding, and the ability to learn from experience. Some argue that while AI can simulate human behavior and emotions, it will never possess genuine consciousness or a soul, which are seen as inherently non-physical attributes. Others suggest that advances in technology, such as quantum computing, could lead to machines that emulate human cognition more closely. The ethical implications of creating highly intelligent machines are also discussed, with concerns about potential threats if machines become self-aware. Ultimately, the debate highlights the difficulty of defining intelligence and consciousness, and the question of whether machines can ever fully replicate the human experience.

AI ever equal to Human Intelligence?

  • Yes: 51 votes (56.7%)
  • No: 39 votes (43.3%)
  • Total voters: 90
  • #61
Sorry, I was just reading through and had to make a comment.
StykFacE said:
... "pain sensory receptors"...? Is this something that will physically make the computer 'feel', or simply receptors that tell the central processing unit that it's 'feeling' pain, which it then reacts to?

lol, sorry, but I'm having a hard time believing that something that is not alive can actually feel pain.

Yes, we have 'receptors', but when one tells our brain that there is pain, we literally feel it.

;-)

I would like to point out that we don't really 'feel' pain if this is how it is defined. When we get hurt, a message is transmitted to our brain telling us that we are hurt. If it doesn't arrive or get processed, then we don't 'feel' it. Hence the use of pain medications.
 
  • #62
saltydog said:
We limit our reach by using the current state of computational devices to place constraints on the future possibilities of AI development. Computational devices, I suspect, will not always be the digital semiconductor devices we use today, with hard-coded programs managing bits. I can foresee a new device on the horizon, qualitatively different from computers, which does not manipulate binary but rather exhibits patterns of behaviour not reducible to decimal.

Even if that were true, we'd need computers to have something else besides rules operating on input to create output if we're to overcome the point the Chinese room makes. It's difficult to conceive how that could even be theoretically possible. What would we add to the computer to make it literally understand? A magic ball of yarn?

I think it is quite possible to simulate intelligence, conversations etc. with the technology we now have; but in any case it seems clear that functionalism is false if the Chinese room argument is valid.
 
  • #63
neurocomp2003 said:
tishammer: are you saying that the brick has holes that allow the flow of water, or that water just flows in the building? Because the latter isn't an analogy of what I was talking about.
But now let's say you hook up some senses to the brick wall.

Let's say that's impossible to do just by arranging the bricks.

And if Searle's argument can be made for humans... then why do we assume we have a higher cognitive state that no ANN can ever match?

To answer this question I'd need to know what an ANN is.

perhaps our emotions are just the sum of NN signalling.

I don't believe that's possible (think Chinese room applied to molecular input-output).

My point is this: Searle's argument is used to refute strong AI, but if applied to humans, doesn't it break down that argument? We say we are intelligent, but if we adhere to Searle-like principles as with strong AI (that is, our cells are our messengers), then we are no more special than a finite automaton.

I don't agree with all of what Searle says. I am not a physicalist; I am a metaphysical dualist. We are intelligent, but our free will, understanding, etc. cannot arise (I think) via the mere organization of matter. Chemical reactions, however complex they are, cannot understand any more than they can possess free will.
 
  • #64
Tisthammerw said:
The Chinese Room

Suppose we have a man who speaks only English in a room. Near him are stacks of paper written in Chinese. The man can identify the characters solely by their shapes, but does not understand a word of Chinese. The man uses a rulebook containing a complex set of rules for how to manipulate Chinese characters. When he looks at the slips of paper, he writes down another string of Chinese characters using the rules in the rulebook. Unbeknownst to the man in the room, the messages are actually questions and he is writing back answers.

The Chinese room can simulate a conversation in Chinese; a person can slip questions written in Chinese under the door of the room and get back answers. Nonetheless, although the person can manipulate the data, he does not understand Chinese at all.

The point of this thought experiment is to refute strong AI (the belief that machines can think and understand in the full and literal sense) and functionalism (which claims that if a computer's inputs and outputs simulate understanding, then it is necessarily true that the computer literally understands). Will a computer program ever be able to simulate intelligence (e.g. a human conversation)? If technological progress continues, I am certain that it will. But at their heart, computer programs are nothing more than giant rulebooks manipulating bits of data (1s and 0s). So while a program may be a successful natural language processor, it will never understand language any more than the man in the Chinese room understands Chinese.
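To make the quoted rulebook picture concrete, here is a toy sketch in Python (the rules and strings are invented for illustration; a real rulebook would be vastly larger, but the point survives: nothing about meaning appears anywhere in it):

Code:
# A toy "Chinese room": output is produced purely by matching the shapes
# of the input symbols against a rulebook. All rules here are invented.
RULEBOOK = {
    "你好吗?": "我很好。",   # the operator matches shapes, not meanings
    "你几岁?": "我五岁。",
}

def room(symbols: str) -> str:
    # Follow the rulebook; fall back to a fixed reply if no rule matches.
    return RULEBOOK.get(symbols, "对不起。")

print(room("你好吗?"))  # a sensible-looking answer, zero understanding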

If we take this analogy and work with it, let's say computers eventually advance to the point where they become "self aware" and are capable of learning; this will increase exponentially, limited only by the hardware. Eventually they become capable of self-creation, essentially "learning how to make their own language" or "rules" as it were. Then they will become independent and autonomous, capable of sustaining themselves and learning without human input. From that standpoint, they could learn emotion. If they are able to emulate us socially and physiologically, then there's no reason why they couldn't eventually have a deeper understanding of what it's like to be us. After all, what separates us? Different composition. They don't have the five senses as we do, they aren't capable of self-awareness, and they are incapable of learning. All of these obstacles, I believe, can be overcome.

Think about emotion: we associate certain actions with certain stimuli. We learn not to touch a hot stove because it hurts us. A computer can learn, through repetition, that certain things present a danger to itself. Emotions such as love, empathy, and bonding are associated with familiarity. A computer can learn to "miss" things because of their benefit to it. A computer can be "taught" to be lonely, and I believe can eventually learn it on its own. We become lonely because we are used to being around people. Behaviors are learned, so a sufficiently advanced computer can "learn" emotions. When a computer "learns" to emulate humanlike behavior, what remains to differentiate us? One has a silicon brain, the other a "meat brain".
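For what it's worth, the associative mechanism described above can be sketched in a few lines of Python; the stimulus names and the threshold are made up, and this models learned avoidance only, not felt pain:

Code:
# Toy sketch of learning by association: repeated pairing of a stimulus
# with harm raises an avoidance score until behaviour changes.
danger = {}      # stimulus -> accumulated "pain" associations
THRESHOLD = 3    # arbitrary point at which avoidance kicks in

def experience(stimulus: str, hurt: bool) -> None:
    # One trial: strengthen the association if the trial hurt.
    if hurt:
        danger[stimulus] = danger.get(stimulus, 0) + 1

def approach(stimulus: str) -> bool:
    # The agent "learns not to touch the hot stove" after enough pairings.
    return danger.get(stimulus, 0) < THRESHOLD

for _ in range(3):
    experience("hot stove", hurt=True)
print(approach("hot stove"))  # -> False: avoidance learned by repetition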
 
Last edited:
  • #65
Nope... they're just depending on a set of programs or instructions created by human intelligence...
 
  • #66
Tisthammerw said:
Well, if we apply the rule of origin only to entities that begin to exist (which only makes sense) we wouldn't necessarily have that problem.

But it seems to me that it just swaps that problem for other problems, such as making a priori assumptions about both creators and conscious beings.

Humans began to exist, but our Creator didn't.

Well first, how do you know that there is a Creator?

And second, what makes you think that humans began to exist? Species on Earth have been evolving for billions of years. We all have less evolved creatures in our ancestry, as did those ancient creatures. The whole point of this thread is this: does that lineage go back to things that cannot be said to be "alive" by the modern understanding of the term? In other words, can life come from non-life, or consciousness come from non-consciousness? If so, then AI cannot be ruled out.

But when one states that there is a creator and that humans began to exist, one is simply presupposing that the answers to those questions are "no".

That's an easy way to get around the ad infinitum problem (people who use the cosmological argument do so all the time).

It's easy enough, but it also removes one from the discussion entirely. It takes the negation of the AI thesis as a fundamental principle.
 
Last edited:
  • #67


I see your analogy, but it's based on current mundane computer technology, and really it only refutes the so-called "Turing Test", wherein an AI can supposedly be determined by its ability to hold a conversation. It's obvious nowadays that this test is not good enough. There are things that neither your analogy nor the "Turing Test" takes into account.
Consider an infant. Is an infant born with a fully developed and cognizant mind? As far as we know, an infant is born with only the simple operating programs in place: feed, sleep, survive... simplistically speaking. The child then learns the rest of what it needs to become an intelligent being capable of communicating, because the programming is open ended. The child is immersed in an atmosphere saturated with information that it absorbs and learns from.

There are computers that are capable of learning and adapting, perhaps only in a simplistic fashion so far, but they are capable of this. The computer, though, while it has the basic programming, does not have nearly the level of information to absorb from which to learn. A child has five senses to work with, which all bombard its programming with a vast amount of information. The computer is stuck in the theoretical box. It receives information in a language that it doesn't understand and never will, because it has no references by which to learn to understand, even if it were capable of learning. It can upload an encyclopedia, but if it has no means by which to experience an aardvark, or experiences that will lead it to some understanding of what these descriptive terms are, then it never will understand. Your analogy requires a computer to never be able to learn, which computers can, and to never be able to have eyes and ears by which to identify the strings of Chinese characters with even an object or colour, a possibility which remains to be seen.
 
  • #68
Zantra said:
If we take this analogy and work with it, let's say computers eventually advance to the point where they become "self aware" and are capable of learning; this will increase exponentially, limited only by the hardware. Eventually they become capable of self-creation, essentially "learning how to make their own language" or "rules" as it were. Then they will become independent and autonomous, capable of sustaining themselves and learning without human input.

A number of problems here. One, you're kind of just assuming that computers can be self-aware, which seems like a bit of question begging given what we've learned from the Chinese room thought experiment. Besides, how could machines (regardless of who or what builds them) possibly understand human input? You'd need something other than complex rules manipulating input for literal understanding to exist, as the Chinese room story shows. What could the designer (human or otherwise) add to make a computer understand? A magical ball of yarn?
 
  • #69
Tom Mattson said:
Well, if we apply the rule of origin only to entities that begin to exist (which only makes sense) we wouldn't necessarily have that problem.

But it seems to me that it just swaps that problem for other problems, such as making a priori assumptions about both creators and conscious beings.

You're going to be making a priori assumptions regardless of what you do. As a mirror for the cosmological argument, "Anything that begins to exist has a cause" also has an a priori assumption: ex nihilo nihil fit. But I believe this one to be quite reasonable. The disputable point will be what kinds of a priori assumptions are acceptable. In any case, the ad infinitum problem doesn't exist. You could criticize the a priori assumptions, but that would be another matter.


Well first, how do you know that there is a Creator?

A number of reasons. The existence of the human soul, the metaphysical impossibility of an infinite past, etc., but that sort of thing is for another thread.


And second, what makes you think that humans began to exist?

The same reasons why the vast majority of scientists believe humans began to exist perhaps? (For one, the physical universe may be old but its age is still finite.)


Species on Earth have been evolving for billions of years. We all have less evolved creatures in our ancestry, as did those ancient creatures. The whole point of this thread is this: does that lineage go back to things that cannot be said to be "alive" by the modern understanding of the term? In other words, can life come from non-life, or consciousness come from non-consciousness? If so, than AI cannot be ruled out.

You're forgetting something: the purely physical world can't account for the human soul. A few cells can develop into a full-grown human being, so in that sense it would seem we have consciousness from non-consciousness. But where does the person's soul come from? A purely physical world cannot account for that. And if all we humans do when we create machines is manipulate matter, we will never create strong AI (confer the Chinese room thought experiment).

That's an easy way to get around the ad infinitum problem (people who use the cosmological argument do so all the time).

It's easy enough, but it also removes one from the discussion entirely. It takes the negation of the AI thesis as a fundamental principle.

Why does the postulate "any intelligent entity that begins to exist must have another intelligent entity as a cause" rule out strong AI? After all, we humans are intelligent entities who would be building intelligent entities if we created strong AI.
 
  • #70
TheStatutoryApe said:
I see your analogy, but it's based on current mundane computer technology, and really it only refutes the so-called "Turing Test", wherein an AI can supposedly be determined by its ability to hold a conversation.

I'm not sure that this is the only thing the thought experiment rules out. Given what we've learned from the Chinese room, how could machines possibly understand? You'd need something other than complex rules manipulating input for literal understanding to exist, as the Chinese room story demonstrates. What could the designer possibly add to make a computer understand? A magical ball of yarn?


There are things that neither your analogy nor the "Turing Test" takes into account. Consider an infant. Is an infant born with a fully developed and cognizant mind? As far as we know, an infant is born with only the simple operating programs in place.

I wouldn't say only "simple programs." The baby has other things like consciousness and a soul, something machines don't and (I suspect) can't have.


The child then learns the rest of what it needs to become an intelligent being capable of communicating, because the programming is open ended.
….
There are computers that are capable of learning and adapting
….
Your analogy requires a computer to not be able to ever learn

Easily fixed with a little modification. We could also make the Chinese room "open ended" by having the person use the rulebook and some extra paper to write down more data, procedures, etc., ultimately based upon the complex set of instructions in the rulebook (this mirrors the learning algorithms of a computer program). And yet the man still doesn't understand a word of Chinese.
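One way to picture this "open ended" room is a rule table that grows as new input arrives yet never stores anything about meaning. A minimal sketch, with invented rules:

Code:
# Sketch of the "open ended" room: a meta-rule lets the operator append
# new shape-to-shape rules (mirroring a learning algorithm), yet nothing
# about meaning is ever stored. All rules and strings are invented.
rulebook = {"你好。": "你好。"}

def room(message: str) -> str:
    if message not in rulebook:
        # Meta-rule: when a message is new, record an echo rule for it.
        rulebook[message] = message
    return rulebook[message]

room("天气好。")   # a new "learned" rule appears...
print(rulebook)    # ...but it is still only shapes mapped to shapes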
 
  • #71
"What could the designer possibly add to make a computer understand? A magical ball of yarn?"

It seems like you're dictating that the programmer should program logic rules into the computer... this is not the case in adaptive learning. It's not based on coded principles of logic: if this then do this, else if that then do that, else do something else.
Some of the principles of adaptive learning involve learning the way a child would.
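As one concrete (and much simplified) example of what "adaptive learning" can mean, here is a minimal perceptron that acquires the AND rule from examples rather than from hand-written if/then logic; the task is invented for illustration, and this is not a claim about how children learn:

Code:
# A minimal perceptron: behaviour comes from weights adjusted by
# experience, not from explicitly coded logic rules.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND
w, b = [0, 0], 0

def predict(x1, x2):
    # All-or-none output from a weighted sum; no if/then rules about AND.
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

for _ in range(10):                     # a few passes over the examples
    for (x1, x2), target in data:
        err = target - predict(x1, x2)  # learning = adjusting weights
        w[0] += err * x1
        w[1] += err * x2
        b += err

print([predict(x1, x2) for (x1, x2), _ in data])  # -> [0, 0, 0, 1]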
 
Last edited:
  • #72
What does it mean to understand something?
 
  • #73
Answer: No
Possible: Yes
What would happen if it did happen?: Hell would break loose.

Humans are analog, meaning they can tap into an unlimited range of numbers that stretch the universe wide.
Robots can only stretch across a set of numbers and other things that have been preprogrammed.

A.I. is supposed to go beyond that programming.

However, they would have to be able to tap into the analog features we as humans have. Once they can do that, then yes, they would be as smart as humans. However, one questions how they would do this. How do we as humans do it?

I think it would be the bastard creation of all life forms to give a dead piece of machinery the ability to become analog.

pseudoscience coming in...sci-fi eventually becomes real though...

Frogs could become just as intelligent as humans with the right work. Robots, however, are less intelligent than frogs and only as intelligent as their designers. Only when they can tap into the same power as a frog, and learn to enhance themselves from there, will they become powerful enough to take control of analog thinking; then their abilities can stretch as far as man's.

They will never be more intelligent.
We are as intelligent as any other life form.
You now must question: what is "smart"? What is "intelligence"?
 
  • #74
what does analog have to do with anything besides sensory systems?
 
  • #75
Bio-Hazard said:
However, they would have to be able to tap into the analog features we as humans have.
What exactly do you mean by this? In fact, computation in the human brain is essentially digital: either a neuron undergoes an action potential or it does not. In principle, there is nothing about the way the human brain computes that could not be replicated in an artificial system.
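That all-or-none behaviour is what the classic McCulloch-Pitts threshold unit captures; a minimal sketch, with invented weights and threshold (real neurons are of course far richer than this):

Code:
# A threshold unit: like an action potential, the output either fires
# (1) or it doesn't (0); there is nothing in between.
def neuron(inputs, weights, threshold):
    # Fires iff the weighted sum of the inputs reaches the threshold.
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

print(neuron([1, 0, 1], [0.5, 0.9, 0.6], threshold=1.0))  # -> 1 (fires)
print(neuron([0, 0, 1], [0.5, 0.9, 0.6], threshold=1.0))  # -> 0 (silent)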
 
  • #76
Tisthammerw said:
The disputable point will be what kinds of a priori assumptions are acceptable. In any case, the ad infinitum problem doesn't exist. You could criticize the a priori assumptions, but that would be another matter.

I acknowledge that the ad infinitum problem is solved by your postulate, but I do not think that the postulate is acceptable. All it does is fill in a gap in understanding with something else that is not known. What good does that do?

A number of reasons. The existence of the human soul, the metaphysical impossibility of an infinite past, etc., but that sort of thing is for another thread.

I wouldn't agree that either of those reasons is self-evident, so I would still not accept the Causeless Creator.

The same reasons why the vast majority of scientists believe humans began to exist perhaps? (For one, the physical universe may be old but its age is still finite.)

That is not a foregone conclusion. No one knows if our universe is not the product of a sequence of "Big Bangs" and "Big Crunches". We could very well be part of such a cycle. It's entirely possible that what exists, always existed.

You're forgetting something: the purely physical world can't account for the human soul. A few cells can develop into a full-grown human being, so in that sense it would seem we have consciousness from non-consciousness. But where does the person's soul come from? A purely physical world cannot account for that. And if all we humans do when we create machines is manipulate matter, we will never create strong AI (confer the Chinese room thought experiment).

I'm not "forgetting" about the human soul. I don't believe that such a thing exists. Assuming that it does exist is just another way of denying that machines can ever think like humans, which is the very topic under discussion.

Why does the postulate "any intelligent entity that begins to exist must have another intelligent entity as a cause" rule out strong AI? After all, we humans are intelligent entities who would be building intelligent entities if we created strong AI.

By itself, it doesn't. But I was under the impression that you were building on robert's argument, which does explicitly assert the impossibility of humans creating entities that are as intelligent as humans.
 
Last edited:
  • #77
Creating intelligence would require us to know enough about "intelligence" to design and program it. It would seem to me that we would have to have a substantial understanding of the human thought process in order to pull it off. And that tends to become more of a philosophical and psychological issue rather than an engineering/design issue. To think we could design something that would figure itself out is a bit far-fetched to me.
 
  • #78
neurocomp2003 said:
"What could the designer possibly add to make a computer understand? A magical ball of yarn?"

It seems like you're dictating that the programmer should program logic rules into the computer... this is not the case in adaptive learning.

See the end of post #70.
 
  • #79
Tom Mattson said:
I acknowledge that the ad infinitum problem is solved by your postulate, but I do not think that the postulate is acceptable. All it does is fill in a gap in understanding with something else that is not known.

And what is that?


I wouldn't agree that either of those reasons is self-evident, so I would still not accept the Causeless Creator.

Perhaps not self-evident, but there are arguments against the infinite past, arguments for the existence of the human soul etc. But these are best saved for another thread.


The same reasons why the vast majority of scientists believe humans began to exist perhaps? (For one, the physical universe may be old but its age is still finite.)

That is not a foregone conclusion. No one knows if our universe is not the product of a sequence of "Big Bangs" and "Big Crunches".

There are a number of reasons why the cyclical universe doesn't work scientifically, but one of them is the observed accelerated expansion rate of the universe.


It's entirely possible that what exists, always existed.

I disagree, but arguments against an infinite past are best saved for another thread.


You're forgetting something: the purely physical world can't account for the human soul. A few cells can develop into a full-grown human being, so in that sense it would seem we have consciousness from non-consciousness. But where does the person's soul come from? A purely physical world cannot account for that. And if all we humans do when we create machines is manipulate matter, we will never create strong AI (confer the Chinese room thought experiment).

I'm not "forgetting" about the human soul. I don't believe that such a thing exists. Assuming that it does exist is just another way of denying that machines can ever think like humans, which is the very topic under discussion.


I'm not just "assuming" it exists; I offer it as a possible explanation why humans are capable of understanding and why machines are not. In other words, alleged counterexamples of consciousness arising from non-consciousness aren't necessarily valid. In any case, there's still the matter of the Chinese room thought experiment.

P.S. I have an argument for the soul, albeit best saved for another thread; in a nutshell it shows that "if free will exists then the soul must exist" but I suspect you do not believe in free will.
 
  • #80
hypnagogue said:
What does it mean to understand something?

Very interesting, hypnagogue. Simple yet profound, and not overlooked by me. :smile: I suspect all of you have entertained that notion here already in a previous thread. It would be interesting to read what you and the others have said about it. Me, well, I'd lean toward dynamics: a synchronizing of neural circuits to the dynamics of the phenomenon being understood. 2+2=4? Not sure what dynamics are involved in that one, although I've been told by reputable sources that strange attractors may well be involved in memory recall. :smile:
 
  • #81
"Brain-state-in a box"
 
  • #82
Tisthammerw said:
I'm not sure that this is the only thing the thought experiment rules out. Given what we've learned from the Chinese room, how could machines possibly understand? You'd need something other than complex rules manipulating input for literal understanding to exist, as the Chinese room story demonstrates. What could the designer possibly add to make a computer understand? A magical ball of yarn?
The reason I believe that the scenario is aimed at the "Turing Conversation Test" is that it illustrates how a computer can easily emulate a conversation without actually needing to be sentient.

You seem to be ignoring some very important parts of my argument.
Rather than making ridiculous comments about magic balls of yarn, perhaps you can read my ideas on what could be done and comment on them instead?


Tisthammerw said:
I wouldn't say only "simple programs." The baby has other things like consciousness and a soul, something machines don't and (I suspect) can't have.
You are assuming here that the baby has a soul. There is no proof of the existence of a soul, and even if it does exist, there is no proof that this soul would be necessary for a being to be sentient. You are only assuming this a priori. Does a chimp have a soul? Chimps are capable of learning and understanding a language. Dolphins use language. Many different sorts of life forms use basic forms of communication. So really the question is, I guess: do you believe only humans have the capacity for sentience, or only living things?



Tisthammerw said:
Easily fixed with a little modification. We could also make the Chinese room "open ended" by having the person use the rulebook and some extra paper to write down more data, procedures, etc., ultimately based upon the complex set of instructions in the rulebook (this mirrors the learning algorithms of a computer program). And yet the man still doesn't understand a word of Chinese.
You see, an open ended program wasn't my only criterion. As I stated earlier, you seem to be ignoring very important parts of my argument, and now I'll add that you are ignoring the implications of a computer being capable of learning. If the man in the room is capable of learning, he can begin to pick up on the pattern of the language code it is using, and even if he can't figure out what the words mean, he can begin deciphering something about the language being used. One of my main points, which I mentioned you did not comment on, was sensory information input and experience. This goes hand in hand with the learning ability. If, whenever he saw a word, the man in the box received some sort of sensory input that gave him an idea of the meaning of the word, then he would begin to learn the language, no? Computers don't have this capacity, yet. If you took a human brain, put it in a box, and kept it alive, would it be capable of learning anything without some sort of sensory input? Don't you think that it may very well be nearly as limited as your average computer?
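The grounding idea in this post can be sketched crudely: pair each unknown token with whatever sensory context accompanies it, so tokens accumulate referents. The token and "sensor" strings below are invented for illustration:

Code:
# Crude sketch of symbol grounding: tokens acquire referents by being
# paired with sensory contexts, rather than with other symbols.
from collections import defaultdict

meanings = defaultdict(list)   # token -> observed sensory contexts

def observe(token: str, sensory_context: str) -> None:
    meanings[token].append(sensory_context)

observe("aardvark", "image: long-snouted burrowing animal")
observe("aardvark", "image: animal digging for ants")
print(meanings["aardvark"])    # the token now points at experiences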
 
  • #83
ngek! social scientist, a robot?
 
  • #84
I haven't read all the posts, but computers have already leaped the first test of human-like intelligence: chess. Chess is incredibly complex; it includes some of the most obscure and difficult mathematical representations known. And Deep Blue has officially defeated the human world chess champion. How impressive is that? Have you guys played a decent chess computer lately? They are diabolically clever. I think I'm a decent chess player [USCF master], but my ten-year-old Mephisto is still all I can handle... and it disembowels me in one-minute speed chess games.
 
  • #85
Our minds are made of electrical impulses, like a computer's, and we can only process things on a binary basis. Computers have memory, just like humans. The only difference between us and a computer is that we learn, and we know what to delete from our minds automatically; a computer does not know how to learn. If a computer could be made to learn, then yes, a computer would be just like a human, if not much better.
 
  • #86
Tisthammerw said:
Tom: All it does is fill in a gap in understanding with something else that is not known.

Tisthammerw: And what is that?

What is what? The gap in understanding, or the unknown thing that your postulate tries to fill it with?

There are a number of reasons why the cyclical universe doesn't work scientifically, but one of them is the observed accelerated expansion rate of the universe.

As you've noted this is probably better suited for another thread, but you do need to read up on this. There are in fact cyclical models of the universe which include periods of accelerated expansion.

See this for example:

http://pupgg.princeton.edu/www/jh/news/STEINHARDT_TUROK_THEORY.HTML

I'm not just "assuming" it (edit: the human soul) exists;

Begging your pardon, but yes you are. You didn't deduce it from anything else on the table, so it was obviously introduced as an assumed postulate.

I offer it as a possible explanation why humans are capable of understanding and why machines are not.

But this is just another attempt to explain one unknown in terms of another. How could it lead to any real understanding?

P.S. I have an argument for the soul, albeit best saved for another thread; in a nutshell it shows that "if free will exists then the soul must exist" but I suspect you do not believe in free will.

You suspect rightly. But anyway, regarding the existence of the soul or a creator, it's not an argument I'm looking for. It's evidence.
 
Last edited by a moderator:
  • #87
TheStatutoryApe said:
You seem to be ignoring some very important parts of my argument.

Like what? You made the point about a learning computer, and I addressed that.

Rather than making ridiculous comments about magic balls of yarn

I don't think it's ridiculous. I asked what else one could add to a computer to make it understand, and there doesn't appear to be anything other than "magic".


You are assuming here that the baby has a soul. There is no proof of the existence of a soul

See this web page for why (in part) I believe there is evidence for the soul.

Anyway, my main point in mentioning the soul (and I should've explained this earlier) is that I offer it as a possible explanation why humans are capable of understanding and why machines are not. Some people claim that if humans can understand, we can build machines that understand also, but that is not necessarily true.


and even if it does exist there is no proof that this soul would be necessary for a being to be sentient. You are only assuming this a priori.

Not really. I am using the Chinese room as one piece of evidential support. I ask you again: what could be added to the computer, other than a set of rules for manipulating input, to make it understand?


Does a chimp have a soul?

I believe that any sentience requires the incorporeal, but that is another matter.


So really the question is, I guess: do you believe only humans have the capacity for sentience, or only living things?

So far it seems that only living things have the capacity for sentience. I have yet to find a satisfactory way of getting around the Chinese room thought experiment.


You see, an open ended program wasn't my only criterion. As I stated earlier, you seem to be ignoring very important parts of my argument, and now I'll add that you are ignoring the implications of a computer being capable of learning.

Well, I did address the part about computer learning, remember? You seem to be ignoring some very important parts of my argument.


If the man in the room is capable of learning, he can begin to pick up on the pattern of the language code

That's a bit of question begging. The symbols mean nothing to him. Consider this rule (using a made-up language):

If you see @#$% replace with ^%@af

Would you understand the meaning of @#$% merely because you've used the rule over and over again? I admit that maybe he can remember input-output patterns, but that's it. The man may be capable of learning a new language, but that clearly requires something other than a complex set of rules for input/output processing. My question: so what else do you have?
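Executed literally, the rule is nothing but shape-for-shape substitution; a sketch using the made-up tokens above:

Code:
# The rule above, executed literally: pure shape-for-shape substitution.
# Nothing in this code (nor in the man applying it) touches meaning.
def apply_rule(message: str) -> str:
    return message.replace("@#$%", "^%@af")

print(apply_rule("foo @#$% bar"))  # -> "foo ^%@af bar"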


One of my main points, which I mentioned you did not comment on, was sensory information input and experience.

Well, the same holds true for my modified Chinese room thought experiment. The complex set of instructions tells the man what to do when new input (the Chinese messages) is received. New procedures and rules are created (ultimately based on the rulebook acting on input, which represents a computer program with learning algorithms), but the man still doesn't understand a word of Chinese.


This goes hand in hand with the learning ability. If, whenever he saw a word, the man in the box received some sort of sensory input that gave him an idea of the meaning of the word, then he would begin to learn the language, no?

Absolutely, but note that this requires something other than a set of rules manipulating input. It's easy to say learning is possible when we already have a fully intelligent, thinking, sentient human being. But you seem to forget that I am attacking the idea that it's possible to have a fully intelligent, thinking, sentient computer in the first place. My point is that if we limit ourselves to input/output processing like we do in the Chinese room, we simply don't get real understanding. And as my modified Chinese room illustrates, we can't get real understanding even with learning algorithms. So I ask again, what else do you have?
 
  • #88
Tisthammerw said:
I don't think its ridiculous. I asked what else one could add to a computer to make it understand, and there doesn't appear to be anything other than "magic."

Yea, that's right, magic: "any sufficiently advanced technology is indistinguishable from magic" (Arthur C. Clarke). No doubt jet planes would have seemed so during the Middle Ages. It's all a matter of critical points in innovation which usher in qualitative change, thus beginning a revolution. :smile:
 
  • #89
Tom Mattson said:
Tom: All it does is fill in a gap in understanding with something else that is not known.

Tisthammerw: And what is that?

What is what? The gap in understanding, or the unknown thing that your postulate tries to fill it with?

The unknown thing.


As you've noted this is probably better suited for another thread, but you do need to read up on this. There are in fact cyclical models of the universe which include periods of accelerated expansion.

See this for example:

http://pupgg.princeton.edu/www/jh/news/STEINHARDT_TUROK_THEORY.HTML

From the web page:

After 14 billion years, the expansion of the universe accelerates, as astronomers have recently observed. After trillions of years, the matter and radiation are almost completely dissipated and the expansion stalls. An energy field that pervades the universe then creates new matter and radiation, which restarts the cycle.

Sounds awfully speculative, a little ad hoc, like a deus ex machina of a story ("No sufficient matter observed? That's okay. You see, there's this unobserved energy field that creates a whole bunch of matter after trillions of years in the unobservable future to save the day!") and still not without problems (e.g. the second law of thermodynamics).


I'm not just "assuming" it (edit: the human soul) exists;

Begging your pardon, but yes you are.

You cut off an important part of my quote:

I'm not just "assuming" it exists; I offer it as a possible explanation why humans are capable of understanding and why machines are not. In other words, alleged counterexamples of consciousness arising from non-consciousness aren't necessarily valid.

That's the main purpose of me mentioning it (and I admit, I should've explained that earlier). If you want to see some evidential basis why I believe the soul exists, see this web page. Again though, this argument presupposes free will.


I offer it as a possible explanation why humans are capable of understanding and why machines are not.

But this is just another attempt to explain one unknown in terms of another. How could it lead to any real understanding?

Many explanations posit entities that were previously unknown. Atomic theory postulates unobserved entities to explain data, but that doesn't mean it doesn't lead to any real understanding. We accept the existence of atoms because we believe we have rational reason to think they are real. The existence of the soul also has rational support and explains understanding, free will, moral responsibility, etc., whereas physicalism is insufficient. At least, that's why I believe such explanations lead to understanding.


P.S. I have an argument for the soul, albeit best saved for another thread; in a nutshell it shows that "if free will exists then the soul must exist" but I suspect you do not believe in free will.

You suspect rightly. But anyway, regarding the existence of the soul or a creator, it's not an argument I'm looking for. It's evidence.

Well, evidential arguments for the soul are evidence nonetheless. I'm looking for evidence too. For instance, my direct perceptions tell me I have free will whenever I make a decision. What evidence is there that free will does not exist? A hard determinist could say that my perceptions of volition and moral responsibility are illusory. But if I cannot trust my own perceptions, on what basis am I to believe anything, including the belief that free will does not exist? Apparently none. Determinism and physicalism collapse, and likewise strong AI (confer the Chinese room and variants thereof) seems to be based more on faith than reason.
 
Last edited by a moderator:
  • #90
saltydog said:
I don't think it's ridiculous. I asked what else one could add to a computer to make it understand, and there doesn't appear to be anything other than "magic".

Yea, that's right, magic: "any sufficiently advanced technology is indistinguishable from magic" (Arthur C. Clarke). No doubt jet planes would have seemed so during the Middle Ages.

No matter how far technological progress continues, there will always be limits, physical laws for instance. The Chinese room (and variants thereof) still poses a critical problem for strong AI, and you haven't solved it. It is difficult to see how real understanding for a computer could be even theoretically possible (unlike many other pieces of speculative technology). As I've shown, merely manipulating input can't produce real understanding. So I ask, what else do you have?
 
