Can Artificial Intelligence ever reach Human Intelligence?

AI Thread Summary
The discussion centers around whether artificial intelligence (AI) can ever achieve human-like intelligence. Participants express skepticism about AI reaching the complexity of human thought, emphasizing that while machines can process information and make decisions based on programming, they lack true consciousness and emotional depth. The conversation explores the differences between human and machine intelligence, particularly in terms of creativity, emotional understanding, and the ability to learn from experiences. Some argue that while AI can simulate human behavior and emotions, it will never possess genuine consciousness or a soul, which are seen as inherently non-physical attributes. Others suggest that advancements in technology, such as quantum computing, could lead to machines that emulate human cognition more closely. The ethical implications of creating highly intelligent machines are also discussed, with concerns about potential threats if machines become self-aware. Ultimately, the debate highlights the complexity of defining intelligence and consciousness, and whether machines can ever replicate the human experience fully.

AI ever equal to Human Intelligence?

  • Yes: 51 votes (56.7%)
  • No: 39 votes (43.3%)
  • Total voters: 90
  • #51
StykFacE said:
of course it's my problem... lol, i was insulted and there's nothing i can do really. and he wasn't trying to establish common grounds, he was challenging my education against his own.

Good grief.

No, he wanted to know what you know so that he could determine how to answer, or whether to answer at all. As a rule it's best not to assume the worst about people.

If you still feel that this issue is unresolved, then continue it via the private message system. All further posts along this line of discussion will be deleted.
 
  • #52
Thank you, Tom.

Stykface: like Tom said, I was trying to establish a basis of what terms you know... if I don't know where to begin, then I would more than likely start at child development and neural nets. Or, if you want, spiking neurons/nonlinear dynamics, though I myself am only a beginner when it comes to these fields.

But yeah, not once did I take a stab at your intelligence. If you equate intelligence with knowledge, well, then I don't know what to say. Knowledge base is different for everyone, and therefore one cannot compare intelligence based on knowledge alone. IMO, intelligence is based not on what you know but on how fast you learn, or on how well you can apply newly learned things.

And well, I used to have high respect for AutoCAD users, because most of them have to think in terms of schematic 3D.

EDIT: sorry Tom, I was posting while you posted the above post.
Oh, and that dialogue post was funny as hell.
 
  • #53
It would be impossible to program an AI as intelligent as us. The only way an AI as intelligent as us could be created is by something more intelligent than us.
 
  • #54
robert said:
It would be impossible to program an AI as intelligent as us. The only way an AI as intelligent as us could be created is by something more intelligent than us.

But why should that be? Why can't an intelligence appear due to self-organization?

Also, let's look at this argument schema:

An entity with the intelligence of X can exist iff it is created by an intelligence that is greater than that of X.

Just substitute "X=homo sapiens" and what happens? It follows that the only way that human intelligence can exist is if a greater intelligence created human intelligence.

Then one would naturally ask, "So what created the intelligence of our creator?"

This slippery slope would go on ad infinitum.
 
  • #55
Well me too:

Termites don't know what they're building: the clay cathedral emerges from the mud, from local interactions between mud, termite, and pheromone.

What's this got to do with intelligence, AI, and man? Stars don't know either. :smile:
 
Last edited:
  • #56
neurocomp2003 said:
tsishammer: but you see, humans have sensory systems that feed into the brain, and the entire human system flows... does a stack of brick walls flow?

Well, suppose the building has water that flows places. Is the building conscious? Is it capable of understanding?

My point of the "brick building" argument was to illustrate why some people (including me) believe it is implausible that consciousness, understanding, etc. can be brought about by the mere organization of matter.


perhaps from a philosophical standpoint; and the adaptation that a brick wall has become accustomed to is to not respond at all. The entire point of an artificial system is the concept of fluid motion of signals to represent a pattern (like in Steve Grand's book). And we're not talking about a few hundred atoms here; we are talking about:

(~100 billion neurons × atoms/neuron) + (~10,000 synapses/neuron × ~100 billion neurons × atoms/synapse)

That's how many atoms are in the brain, and most likely a rough guess would be 10^25 to 10^30 atoms. Try counting that high.

Suppose we have trillions of bricks. Will the building be conscious?

As for John Searle's much-used argument: this can also be applied to humans, but because we have such big egos we do not adhere to it.

Well, I agree. It can be applied to humans. So what?

The Chinese Room

Suppose we have a man who speaks only English in a room. Near him are stacks of paper written in Chinese. The man can identify the characters solely by their shapes, but does not understand a word of Chinese. The man uses a rulebook containing a complex set of rules for how to manipulate Chinese characters. When he looks at the slips of paper, he writes down another string of Chinese characters using the rules in the rulebook. Unbeknownst to the man in the room, the messages are actually questions and he is writing back answers.

The Chinese room can simulate a conversation in Chinese; a person can slip questions written in Chinese under the door of the room and get back answers. Nonetheless, although the person can manipulate the data, he does not understand Chinese at all.

The point of this thought experiment is to refute strong AI (the belief that machines can think and understand in the full and literal sense) and functionalism (which claims that if a computer’s inputs and outputs simulate understanding, then it is necessarily true that the computer literally understands). Will a computer program ever be able to simulate intelligence (e.g. a human conversation)? If technological progress continues, I am certain that it will. But at their heart, computer programs are nothing more than giant rulebooks manipulating bits of data (1s and 0s). So while a program may be a successful natural language processor, it will never understand language any more than the man in the Chinese room understands Chinese.
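To make the analogy concrete, here is a minimal sketch (in Python) of the kind of pure shape-matching program the room amounts to. The two rulebook entries are invented placeholders standing in for Searle's rulebook:

Code:
# A toy "Chinese room": replies are produced purely by matching shapes
# against a rulebook; meaning is represented nowhere in the program.
RULEBOOK = {
    "你好吗？": "我很好。",      # hypothetical rule: this question -> this canned answer
    "你会说中文吗？": "会。",    # the program never knows what either string means
}

def room(message: str) -> str:
    """Look the symbols up by shape and emit the prescribed reply."""
    return RULEBOOK.get(message, "？")  # default: an uncomprehending shrug

print(room("你好吗？"))  # a fluent-looking reply, with zero understanding

The point survives the sketch: nothing in the lookup is about meaning; scaling the table up changes the fluency, not the blindness.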
 
  • #57
saltydog said:
The complex stack of bricks is static. Nothing happens. Same dif with neurons if they were static. The point is that neurons are dynamic.

True, but even if the bricks jiggled like jello the arrangement still wouldn't understand anything. For the Chinese Room, see post #56. Is the room dynamic? Sure. But the fellow still doesn't understand Chinese.

Star Trek has nothing to do with this.

The phrase "emergent property" regarding that sort of thing was used in an episode of Star Trek: TNG.
 
  • #58
Tom Mattson said:
But why should that be? Why can't an intelligence appear due to self-organization?

Also, let's look at this argument schema:

An entity with the intelligence of X can exist iff it is created by an intelligence that is greater than that of X.

Just substitute "X=homo sapiens" and what happens? It follows that the only way that human intelligence can exist is if a greater intelligence created human intelligence.

Then one would naturally ask, "So what created the intelligence of our creator?"

This slippery slope would go on ad infinitum.

Well, if we apply the rule of origin only to entities that begin to exist (which only makes sense) we wouldn't necessarily have that problem. Humans began to exist, but our Creator didn't. That's an easy way to get around the ad infinitum problem (people who use the cosmological argument do so all the time).
 
  • #59
Tisthammerw said:
Will a computer program ever be able to simulate intelligence (e.g. a human conversation)? If technological progress continues, I am certain that it will. But at its heart computer programs are nothing more than giant rulebooks manipulating bits of data (1s and 0s). So while a program may be a successful natural language processor, it will never understand language any more than the man in the Chinese room understands Chinese.

We limit our reach by using the current state of computational devices to place constraints on the future possibilities of AI development. Computational devices, I suspect, will not always be the digital semiconductor devices we use today with hard-coded programs managing bits. I can foresee a new device on the horizon, qualitatively different from computers, which does not manipulate binary but rather exhibits patterns of behaviour not reducible to decimal.

I've used the analogy of a matrix of butterflies elsewhere: a large matrix with millions of butterflies, one set at each matrix point, flapping their wings. Patterns emerge from the beating: sometimes it's chaotic, other times waves of patterns spread through the matrix. The butterflies respond to stimulus: wind, mating, food supply. A predator approaches the matrix, causing the flapping to exhibit a particular pattern of beating as the matrix, in a very simple sense, becomes conscious of the predator. Later, by random chance or otherwise, this same pattern emerges again in the matrix . . . it remembers.

I know that's weird to some, dumb to others. Discovery comes from the strangest of places. :smile:
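For the curious, here is a toy sketch of that picture, assuming nothing more than a grid of cells whose "flap phase" is nudged by its neighbours. It illustrates how waves of coordinated flapping can propagate, nothing more:

Code:
import random

SIZE = 20        # a 20x20 lattice of "butterflies"
COUPLING = 0.3   # how strongly each one follows its neighbours

# each butterfly has a flap phase in [0, 1)
grid = [[random.random() for _ in range(SIZE)] for _ in range(SIZE)]

def step(grid):
    """Advance every butterfly: a small drift plus a pull toward its neighbours."""
    new = [[0.0] * SIZE for _ in range(SIZE)]
    for i in range(SIZE):
        for j in range(SIZE):
            neighbours = [grid[(i + di) % SIZE][(j + dj) % SIZE]
                          for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))]
            mean = sum(neighbours) / len(neighbours)
            new[i][j] = (grid[i][j] + 0.05 + COUPLING * (mean - grid[i][j])) % 1.0
    return new

for _ in range(100):
    grid = step(grid)
# after enough steps, waves of nearly synchronized flapping spread through the lattice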
 
  • #60
tishammer: are you saying that the brick has holes that allow the flow of water? Or that water just flows in the building? Because the latter isn't an analogy of what I was talking about.
But now let's say you hook up some senses to the brick wall, so that it could really detect the "outside" world and then be allowed to interact. You've got to remember the brain isn't grown in one day. I highly doubt a baby without a brain will ever grow conscious. But that is an immoral experiment.

And if Searle's argument can be made for humans... then why do we assume we have a higher cognitive state that no ANN can ever match? Perhaps our emotions are just the sum of NN signalling. My point is this: Searle's argument is used to refute strong AI, and if applied to humans, doesn't it break down? Because we say we are intelligent, but if we adhere to Searle-like principles for strong AI (that is, our cells are our messengers), then we are no more special than a finite automaton.

ALSO, in regard to your post to Tom: it doesn't make sense to slap on a new set of rules of beginnings for the first creator, that he magically existed... unless you're saying that his being evolved from the physical fundamentals that exist in our universe.
 
  • #61
Sorry, I was just reading through and had to make a comment.
StykFacE said:
... "pain sensory receptors"...? is this something that will physically make the computer 'feel', or simply receptors that tell the central processing unit that it's 'feeling' pain, then it reacts to it.

lol, sorry but I'm having a hard time believing that something, that is not alive, can actually feel pain.

yes we have 'receptors', but when it tells are brain that there is pain, we literally feel it.

;-)

I would like to point out that we don't really 'feel' pain if this is how it is defined. When we get hurt, a message is transmitted to our brain telling us that we are hurt. If it doesn't arrive or get processed, then we don't 'feel' it. Hence the use of pain medications.
 
  • #62
saltydog said:
We limit our reach by using the current state of computational devices to place constraints on the future possibilities of AI development. Computational devices I suspect will not always be the digital semiconductor devices we use today with hard-coded programs managing bits. I can foresee a new device on the horizon, qualitatively different from computers which does not manipulate binary but rather exhibits patterns of behaviour not reducible to decimal.

Even if that were true, we'd need computers to have something else besides operating rules on input to create output if we're to overcome the point the Chinese room makes. It's difficult to conceive how that could even be theoretically possible. What would we add to the computer to make it literally understand? A magic ball of yarn?

I think it is quite possible to simulate intelligence, conversations etc. with the technology we now have; but in any case it seems clear that functionalism is false if the Chinese room argument is valid.
 
  • #63
neurocomp2003 said:
tishammer: are you saying that the brick has holes that allow the flow of water? Or that water just flows in the building? Because the latter isn't an analogy of what I was talking about.
But now let's say you hook up some senses to the brick wall.

Let's say that's impossible to do just by arranging the bricks.

And if Searle's argument can be made for humans... then why do we assume we have a higher cognitive state that no ANN can ever match?

To answer this question I'd need to know what an ANN is.

Perhaps our emotions are just the sum of NN signalling.

I don't believe that's possible (think Chinese room applied to molecular input-output).

My point is this: Searle's argument is used to refute strong AI, and if applied to humans, doesn't it break down? Because we say we are intelligent, but if we adhere to Searle-like principles for strong AI (that is, our cells are our messengers), then we are no more special than a finite automaton.

I don't agree with all of what Searle says. I am not a physicalist, I am a metaphysical dualist. We are intelligent, but our free will, understanding, etc. cannot be produced (I think) via the mere organization of matter. Chemical reactions, however complex they are, can no more understand than they can possess free will.
 
  • #64
Tisthammerw said:
The Chinese Room

Suppose we have a man who speaks only English in a room. Near him are stacks of paper written in Chinese. The man can identify the characters solely by their shapes, but does not understand a word of Chinese. The man uses a rulebook containing a complex set of rules for how to manipulate Chinese characters. When he looks at the slips of paper, he writes down another string of Chinese characters using the rules in the rulebook. Unbeknownst to the man in the room, the messages are actually questions and he is writing back answers.

The Chinese room can simulate a conversation in Chinese; a person can slip questions written in Chinese under the door of the room and get back answers. Nonetheless, although the person can manipulate the data, he does not understand Chinese at all.

The point of this thought experiment is to refute strong AI (the belief that machines can think and understand in the full and literal sense) and functionalism (which claims that if a computer’s inputs and outputs simulate understanding, then it is necessarily true that the computer literally understands). Will a computer program ever be able to simulate intelligence (e.g. a human conversation)? If technological progress continues, I am certain that it will. But at their heart, computer programs are nothing more than giant rulebooks manipulating bits of data (1s and 0s). So while a program may be a successful natural language processor, it will never understand language any more than the man in the Chinese room understands Chinese.

If we take this analogy and work with it, let's say computers eventually advance to the point where they become "self-aware" and are capable of learning; this will increase exponentially, limited only by the hardware. Eventually they become capable of self-creation, essentially "learning how to make their own language" or "rules" as it were. Then they will become independent and autonomous, capable of sustaining themselves and learning without human input. From that standpoint, they could learn emotion. If they are able to emulate us socially and physiologically, then there's no reason why they couldn't eventually have a deeper understanding of what it's like to be us. After all, what separates us? Different composition. They don't have the five senses as we do, they aren't capable of self-awareness, and they are incapable of learning. All of these obstacles, I believe, can be overcome.

Think about emotion: we associate certain actions with certain stimuli. We learn not to touch a hot stove because it hurts us. A computer can learn, through repetition, that certain things present a danger to self. Emotions such as love, empathy, and bonding are associated with familiarity. A computer can learn to "miss" things because of their benefit to it. A computer can be "taught" to be lonely, and I believe can eventually learn it on its own. We become lonely because we are used to being around people. Behaviors are learned, so a sufficiently advanced computer can "learn" emotions. When a computer "learns" to emulate human behavior, what remains to differentiate us? One has a silicon brain, the other a "meat brain".
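As a toy illustration of that kind of learning by repetition (the stimulus and numbers are made up), here is a simple associative update of the Rescorla-Wagner flavour:

Code:
# Repeated pairing of a stimulus with pain strengthens an "avoid" association.
LEARNING_RATE = 0.2
association = {"hot_stove": 0.0}   # hypothetical stimulus; 0 = no fear, 1 = full avoidance

def experience(stimulus, pain):
    """Nudge the association toward the outcome actually experienced."""
    v = association[stimulus]
    association[stimulus] = v + LEARNING_RATE * (pain - v)

for _ in range(10):                # touch the stove ten times...
    experience("hot_stove", 1.0)

print(association["hot_stove"])    # ~0.89: the system now "avoids" the stove

Whether an association like this amounts to an emotion is, of course, exactly what the thread is arguing about.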
 
  • #65
Nope... they're just depending on a set of programs or instructions created by human intelligence...
 
  • #66
Tisthammerw said:
Well, if we apply the rule of origin only to entities that begin to exist (which only makes sense) we wouldn't necessarily have that problem.

But it seems to me that it just swaps that problem for other problems, such as making a priori assumptions about both creators and conscious beings.

Humans began to exist, but our Creator didn't.

Well first, how do you know that there is a Creator?

And second, what makes you think that humans began to exist? Species on Earth have been evolving for billions of years. We all have less evolved creatures in our ancestry, as did those ancient creatures. The whole point of this thread is this: does that lineage go back to things that cannot be said to be "alive" by the modern understanding of the term? In other words, can life come from non-life, or consciousness come from non-consciousness? If so, then AI cannot be ruled out.

But when one states that there is a creator and that humans began to exist, one is simply presupposing that the answers to those questions are "no".

That's an easy way to get around the ad infinitum problem (people who use the cosmological argument do so all the time).

It's easy enough, but it also removes one from the discussion entirely. It takes the negation of the AI thesis as a fundamental principle.
 
  • #67


I see your analogy, but it's based on current, mundane computer technology, and really it only refutes the so-called "Turing Test", wherein an AI can supposedly be determined by its ability to hold a conversation. It's obvious nowadays that this test is not good enough. There are things that neither your analogy nor the "Turing Test" takes into account.
Consider an infant. Is an infant born with a fully developed and cognizant mind? As far as we know, an infant is born with only the simple operating programs in place. Feed, sleep, survive... simplistically speaking. The child then learns the rest of what it needs to become an intelligent being capable of communicating, because the programming is open-ended. The child is immersed in an atmosphere saturated with information that it absorbs and learns from.

There are computers that are capable of learning and adapting, perhaps only in a simplistic fashion so far, but they are capable of this. The computer, though, while it has the basic programming, does not have nearly the level of information to absorb from which to learn. A child has five senses to work with, which all bombard its programming with a vast amount of information. The computer is stuck in the theoretical box. It receives information in a language that it doesn't understand and never will, because it has no references by which to learn to understand, even if it were capable of learning. It can upload an encyclopedia, but if it has no means by which to experience an aardvark, or have experiences that will lead it to some understanding of what these descriptive terms are, then it never will understand. Your analogy requires a computer to never be able to learn, which they can, and to never be able to have eyes and ears by which to identify the strings of Chinese characters with even an object or colour, a possibility which remains to be seen.
 
  • #68
Zantra said:
If we take this analogy and work with it, let's say computers eventually advance to the point where they become "self-aware" and are capable of learning; this will increase exponentially, limited only by the hardware. Eventually they become capable of self-creation, essentially "learning how to make their own language" or "rules" as it were. Then they will become independent and autonomous, capable of sustaining themselves and learning without human input.

A number of problems here. One, you're kind of just assuming that computers can be self-aware, which seems like a bit of question-begging given what we've learned from the Chinese room thought experiment. Besides, how could machines (regardless of who or what builds them) possibly understand human input? You'd need something other than complex rules manipulating input for literal understanding to exist, as the Chinese room story shows. What could the designer (human or otherwise) add to make a computer understand? A magical ball of yarn?
 
  • #69
Tom Mattson said:
Well, if we apply the rule of origin only to entities that begin to exist (which only makes sense) we wouldn't necessarily have that problem.

But it seems to me that it just swaps that problem for other problems, such as making a priori assumptions about both creators and conscious beings.

You're going to be making a priori assumptions regardless of what you do. As a mirror for the cosmological argument, "Anything that begins to exist has a cause" also has an a priori assumption: ex nihilo nihil fit. But I believe this one to be quite reasonable. The disputable point will be what kinds of a priori assumptions are acceptable. In any case, the ad infinitum problem doesn't exist. You could criticize the a priori assumptions, but that would be another matter.


Well first, how do you know that there is a Creator?

A number of reasons. The existence of the human soul, the metaphysical impossibility of an infinite past etc. but that sort of thing is for another thread.


And second, what makes you think that humans began to exist?

The same reasons why the vast majority of scientists believe humans began to exist perhaps? (For one, the physical universe may be old but its age is still finite.)


Species on Earth have been evolving for billions of years. We all have less evolved creatures in our ancestry, as did those ancient creatures. The whole point of this thread is this: does that lineage go back to things that cannot be said to be "alive" by the modern understanding of the term? In other words, can life come from non-life, or consciousness come from non-consciousness? If so, then AI cannot be ruled out.

You're forgetting something: the purely physical world can't account for the human soul. A few cells can develop into a full-grown human being, so in that sense it would seem we have consciousness from non-consciousness. But where does the person's soul come from? A purely physical world cannot account for that. And if all we humans do when we create machines is manipulate matter, we will never create strong AI (confer the Chinese room thought experiment).

That's an easy way to get around the ad infinitum problem (people who use the cosmological argument do so all the time).

It's easy enough, but it also removes one from the discussion entirely. It takes the negation of the AI thesis as a fundamental principle.

Why does the postulate "any intelligent entity that begins to exist must have another intelligent entity as a cause" rule out strong AI? After all, we humans are intelligent entities who would be building intelligent entities if we created strong AI.
 
  • #70
TheStatutoryApe said:
I see your analogy, but it's based on current, mundane computer technology, and really it only refutes the so-called "Turing Test", wherein an AI can supposedly be determined by its ability to hold a conversation.

I'm not sure the thought experiment rules out only that. Given what we've learned from the Chinese room, how could machines possibly understand? You'd need something other than complex rules manipulating input for literal understanding to exist, as the Chinese room story demonstrates. What could the designer possibly add to make a computer understand? A magical ball of yarn?


There are things that neither your analogy nor the "Turing Test" takes into account. Consider an infant. Is an infant born with a fully developed and cognizant mind? As far as we know, an infant is born with only the simple operating programs in place.

I wouldn't say only "simple programs." The baby has other things like consciousness and a soul, something machines don't and (I suspect) can't have.


The child then learns the rest of what it needs to become an intelligent being capable of communicating, because the programming is open-ended.
….
There are computers that are capable of learning and adapting
….
Your analogy requires a computer to not be able to ever learn

Easily fixed with a little modification. We could also make the Chinese room "open ended" via the person using the rulebook and some extra paper to write down more data, procedures, etc. ultimately based upon the complex set of instructions in the rulebook (this mirrors the learning algorithms of a computer program). And yet the man still doesn't understand a word of Chinese.
 
  • #71
"What could the designer possibly add to make a computer understand? A magical ball of yarn?"

It seems like you're dictating that the programmer should program logic rules into the computer... this is not the case in adaptive learning. It's not based on coding principles of logic: "if this then do this, else if this then do that, else do something else."
Some of the principles of adaptive learning involve learning the way a child would (see the sketch below).
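A minimal sketch of that contrast, assuming a toy perceptron: the programmer writes only the update rule, and the if-then behaviour is learned from examples rather than coded by hand:

Code:
# Toy perceptron: the "rules" (weights) are learned from data,
# not written by the programmer as explicit if-then logic.
def train(samples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred
            w[0] += lr * err * x1   # nudge the weights toward the examples
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# learn logical AND purely from examples (a hypothetical training set)
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(samples)
print(w, b)   # the learned weights now implement AND without any coded rule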
 
  • #72
What does it mean to understand something?
 
  • #73
Answer: No
Possible: Yes
What would happen if it did happen?: Hell would break loose.

Humans are analog, meaning they can tap into an unlimited range of numbers that stretches the universe wide.
Robots can only stretch across a set of numbers and other things that have been preprogrammed.

A.I. is supposed to go beyond that programming.

However, they would have to be able to tap into the analog features we as humans have. Once they can do that, then yes, they would be as smart as humans. However, one questions how they would do this. How do we as humans do it?

I think it would be the bastard creation of all life forms to give a dead piece of machine the ability to become analog.

pseudoscience coming in...sci-fi eventually becomes real though...

Frogs could become just as intelligent as humans with the right work. Robots, however, are less intelligent than frogs and only as intelligent as their designers. Only when they can tap into the same power as a frog, and learn to enhance themselves from there, will they become powerful enough to take control of analog thinking; thus their abilities can stretch as far as man's.

They will never be more intelligent.
We are as intelligent as any other life form.
You now must question: what is "smart"? What is "intelligence"?
 
  • #74
what does analog have to do with anything besides sensory systems?
 
  • #75
Bio-Hazard said:
However, they would have to be able to tap into the analog features we as humans have.
What exactly do you mean by this? In fact, computation in the human brain is essentially digital: either a neuron undergoes an action potential or it does not. In principle, there is nothing about the way the human brain computes that could not be replicated in an artificial system.
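A toy leaky integrate-and-fire neuron makes the all-or-none point concrete (the parameters here are arbitrary, not biophysical): the membrane voltage varies continuously, but the output is a full spike or nothing at all.

Code:
# Leaky integrate-and-fire neuron: continuous input, all-or-none output.
THRESHOLD = 1.0   # spike when the voltage crosses this (arbitrary units)
LEAK = 0.9        # fraction of voltage retained each time step
V_RESET = 0.0

def simulate(inputs):
    v, spikes = 0.0, []
    for current in inputs:
        v = LEAK * v + current    # integrate the input, with leak
        if v >= THRESHOLD:        # a threshold crossing...
            spikes.append(1)      # ...fires a full spike (all-or-none)
            v = V_RESET
        else:
            spikes.append(0)      # otherwise: nothing at all
    return spikes

print(simulate([0.3] * 10))  # steady sub-threshold input -> occasional spikes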
 
  • #76
Tisthammerw said:
The disputable point will be what kinds of a priori assumptions are acceptable. In any case, the ad infinitum problem doesn't exist. You could criticize the a priori assumptions, but that would be another matter.

I acknowledge that the ad infinitum problem is solved by your postulate, but I do not think that the postulate is acceptable. All it does is fill in a gap in understanding with something else that is not known. What good does that do?

A number of reasons. The existence of the human soul, the metaphysical impossibility of an infinite past etc. but that sort of thing is for another thread.

I wouldn't agree that either of those reasons is self-evident, so I would still not accept the Causeless Creator.

The same reasons why the vast majority of scientists believe humans began to exist perhaps? (For one, the physical universe may be old but its age is still finite.)

That is not a foregone conclusion. No one knows if our universe is not the product of a sequence of "Big Bangs" and "Big Crunches". We could very well be part of such a cycle. It's entirely possible that what exists, always existed.

You're forgetting something: the purely physical world can't account for the human soul. A few cells can develop into a full-grown human being, so in that sense it would seem we have consciousness from non-consciousness. But where does the person's soul come from? A purely physical world cannot account for that. And if all we humans do when we create machines is manipulate matter, we will never create strong AI (confer the Chinese room thought experiment).

I'm not "forgetting" about the human soul. I don't believe that such a thing exists. Assuming that it does exist is just another way of denying that machines can ever think like humans, which is the very topic under discussion.

Why does the postulate "any intelligent entity that begins to exist must have another intelligent entity as a cause" rule out strong AI? After all, we humans are intelligent entities who would be building intelligent entities if we created strong AI.

By itself, it doesn't. But I was under the impression that you were building on robert's argument, which does explicitly assert the impossibility of humans creating entities that are as intelligent as humans.
 
  • #77
Creating intelligence would require us to know enough about "intelligence" to design and program it. It would seem to me that we would have to have a substantial understanding of the human thought process in order to pull it off. And that tends to become more of a philosophical and psychological issue rather than an engineering/design issue. To think we could design something that would figure itself out is a bit far-fetched to me.
 
  • #78
neurocomp2003 said:
"What could the designer possibly add to make a computer understand? A magical ball of yarn?"

It seems like you're dictating that the programmer should program logic rules into the computer... this is not the case in adaptive learning.

See the end of post #70.
 
  • #79
Tom Mattson said:
I acknowledge that the ad infinitum problem is solved by your postulate, but I do not think that the postulate is acceptable. All it does is fill in a gap in understanding with something else that is not known.

And what is that?


I wouldn't agree that either of those reasons is self-evident, so I would still not accept the Causeless Creator.

Perhaps not self-evident, but there are arguments against the infinite past, arguments for the existence of the human soul etc. But these are best saved for another thread.


The same reasons why the vast majority of scientists believe humans began to exist perhaps? (For one, the physical universe may be old but its age is still finite.)

That is not a foregone conclusion. No one knows if our universe is not the product of a sequence of "Big Bangs" and "Big Crunches".

There are a number of reasons why the cyclical universe doesn't work scientifically, but one of them is the observed accelerated expansion rate of the universe.


It's entirely possible that what exists, always existed.

I disagree, but arguments against an infinite past are best saved for another thread.


You're forgetting something: the purely physical world can't account for the human soul. A few cells can develop into a full-grown human being, so in that sense it would seem we have consciousness from non-consciousness. But where does the person's soul come from? A purely physical world cannot account for that. And if all we humans do when we create machines is manipulate matter, we will never create strong AI (confer the Chinese room thought experiment).

I'm not "forgetting" about the human soul. I don't believe that such a thing exists. Assuming that it does exist is just another way of denying that machines can ever think like humans, which is the very topic under discussion.


I'm not just "assuming" it exists; I offer it as a possible explanation why humans are capable of understanding and why machines are not. In other words, alleged counterexamples of consciousness arising from non-consciousness aren't necessarily valid. In any case, there's still the matter of the Chinese room thought experiment.

P.S. I have an argument for the soul, albeit best saved for another thread; in a nutshell it shows that "if free will exists then the soul must exist" but I suspect you do not believe in free will.
 
  • #80
hypnagogue said:
What does it mean to understand something?

Very interesting, hypnagogue. Simple yet profound, and not overlooked by me. :smile: I suspect all of you have entertained that notion here already in a previous thread. It would be interesting to read what you and the others have said about it. Me, well, I'd lean toward dynamics: a synchronizing of neural circuits to the dynamics of the phenomenon being understood. 2+2=4? Not sure what dynamics are involved in that one, although I've been told by reputable sources that strange attractors may well be involved in memory recall. :smile:
 
  • #81
"Brain-state-in a box"
 
  • #82
Tisthammerw said:
I'm not sure the thought experiment rules out only that. Given what we've learned from the Chinese room, how could machines possibly understand? You'd need something other than complex rules manipulating input for literal understanding to exist, as the Chinese room story demonstrates. What could the designer possibly add to make a computer understand? A magical ball of yarn?
The reason I believe that the scenario is aimed at the "Turing Conversation Test" is that it illustrates how a computer can easily emulate a conversation without actually needing to be sentient.

You seem to be ignoring some very important parts of my argument.
Rather than making ridiculous comments about magic balls of yarn, perhaps you could read my ideas on what could be done and comment on them instead?


Tisthammerw said:
I wouldn't say only "simple programs." The baby has other things like consciousness and a soul, something machines don't and (I suspect) can't have.
You are assuming here that the baby has a soul. There is no proof of the existence of a soul, and even if it does exist, there is no proof that this soul would be necessary for a being to be sentient. You are only assuming this a priori. Does a chimp have a soul? Chimps are capable of learning and understanding a language. Dolphins use language. Many different sorts of life forms use basic forms of communication. So really, the question is, I guess: do you believe only humans have the capacity for sentience, or only living things?



Tisthammerw said:
Easily fixed with a little modification. We could also make the Chinese room "open ended" via the person using the rulebook and some extra paper to write down more data, procedures, etc. ultimately based upon the complex set of instructions in the rulebook (this mirrors the learning algorithms of a computer program). And yet the man still doesn't understand a word of Chinese.
You see, an open-ended program wasn't my only criterion. As I stated earlier, you seem to be ignoring very important parts of my argument, and now I'll add that you are ignoring the implications of a computer being capable of learning. If the man in the room is capable of learning, he can begin to pick up on the pattern of the language code it is using, and even if he can't figure out what the words mean, he can begin deciphering something about the language being used. One of my main points, which I mentioned you did not comment on, was sensory information input and experience. This goes hand in hand with the learning ability. If the man in the box were capable, whenever he saw a word, of having some sort of sensory input that would give him an idea of the meaning of the word, then he would begin to learn the language, no? Computers don't have this capacity, yet. If you took a human brain, put it in a box, and kept it alive, would it be capable of learning anything without some sort of sensory input? Don't you think that it may very well be nearly as limited as your average computer?
 
  • #83
ngek! social scientist, a robot?
 
  • #84
I haven't read all the posts, but computers have already leaped the first test of human-like intelligence: chess. Chess is incredibly complex. It includes some of the most obscure and difficult mathematical representations known. And Deep Blue has officially defeated the human world chess champion. How impressive is that? Have you guys played a decent chess computer lately? They are diabolically clever. I think I'm a decent chess player [USCF master], but my ten-year-old Mephisto is still all I can handle... and it disembowels me in one-minute speed chess games.
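For context, the core idea inside those chess programs is minimax search over a game tree. Here is a bare-bones sketch on a toy game (Nim: take 1 or 2 stones, whoever takes the last stone wins), since a real chess engine adds pruning and a heuristic evaluation but not a different principle:

Code:
# Minimal game-tree search: a position wins if some move leaves the
# opponent in a losing position. Chess engines use the same minimax idea
# at vastly greater scale.
def winning(stones: int) -> bool:
    """True if the player to move can force a win (full-depth search)."""
    return any(not winning(stones - take) for take in (1, 2) if take <= stones)

print([n for n in range(1, 13) if not winning(n)])  # losing counts: [3, 6, 9, 12]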
 
  • #85
Our minds are made of electrical impulses, like a computer; we can only process things on a binary basis. Computers have memory, just like humans. The only difference between us and a computer is that we learn, and know what to delete in our minds automatically; a computer does not know how to learn. If a computer could be made to learn, then yes, a computer would be just like a human, if not much better.
 
  • #86
Tisthammerw said:
Tom: All it does is fill in a gap in understanding with something else that is not known.

Tisthammerw: And what is that?

What is what? The gap in understanding, or the unknown thing that your postulate tries to fill it with?

There are a number of reasons why the cyclical universe doesn't work scientifically, but one of them is the observed accelerated expansion rate of the universe.

As you've noted this is probably better suited for another thread, but you do need to read up on this. There are in fact cyclical models of the universe which include periods of accelerated expansion.

See this for example:

http://pupgg.princeton.edu/www/jh/news/STEINHARDT_TUROK_THEORY.HTML

I'm not just "assuming" it (edit: the human soul) exists;

Begging your pardon, but yes you are. You didn't deduce it from anything else on the table, so it was obviously introduced as an assumed postulate.

I offer it as a possible explanation why humans are capable of understanding and why machines are not.

But this is just another attempt to explain one unknown in terms of another. How could it lead to any real understanding?

P.S. I have an argument for the soul, albeit best saved for another thread; in a nutshell it shows that "if free will exists then the soul must exist" but I suspect you do not believe in free will.

You suspect rightly. But anyway, regarding the existence of the soul or a creator, it's not an argument I'm looking for. It's evidence.
 
  • #87
TheStatutoryApe said:
You seem to be ignoring some very important parts of my argument.

Like what? You made the point about a learning computer, and I addressed that.

Rather than making ridiculous comments about magic balls of yarn

I don't think it's ridiculous. I asked what else one could add to a computer to make it understand, and there doesn't appear to be anything other than "magic."


You are assuming here that the baby has a soul. There is no proof of the existence of a soul

See this web page for why (in part) I believe there is evidence for the soul.

Anyway, my main point in bringing up the soul (and I should've mentioned this earlier) is that I offer it as a possible explanation of why humans are capable of understanding and why machines are not. Some people claim that if humans can understand, we can build machines that understand also, but that is not necessarily true.


and even if it does exist there is no proof that this soul would be necessary for a being to be sentient. You are only assuming this a priori.

Not really. I am using the Chinese room for one piece of evidential support. I ask you again, what could be added to the computer other than a set of rules for manipulating input to make it understand?


Does a chimp have a soul?

I believe that any sentience requires the incorporeal, but that is another matter.


So really the question is I guess do you believe only humans have the capacity for sentience or only living things?

So far it seems that only living things have the capacity for sentience. I have yet to find a satisfactory way of getting around the Chinese room thought experiment.


You see an open ended program wasn't my only criterion. As I stated earlier you seem to be ignoring very important parts of my argument and now I'll add that you are ignoring the implications of a computer being capable of learning.

Well, I did address the part of computer learning, remember? You seem to be ignoring some very important parts of my argument.


If the man in the room is capable of learning, he can begin to pick up on the pattern of the language code

That's a bit of question begging. The symbols mean nothing to him. Consider this rule (using a made-up language):

If you see @#$% replace with ^%@af

Would you understand the meaning of @#$% merely because you've used the rule over and over again? I admit that maybe he can remember input-output patterns, but that's it. The man may be capable of learning a new language, but this clearly requires something other than a complex set of rules for input/output processing. My question: so what else do you have?
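A minimal sketch of that point, reusing the made-up rule above: even a program that "learns" by recording new input-output patterns still stores only shapes, never meanings:

Code:
# The rulebook, plus "learning": unknown symbols get a freshly invented
# rule. The program accumulates patterns, but no meanings.
rules = {"@#$%": "^%@af"}            # the made-up rule from above

def respond(symbols: str) -> str:
    if symbols not in rules:
        rules[symbols] = symbols[::-1]   # "learn" a new rule; any rule will do
    return rules[symbols]                # apply it, understanding nothing

print(respond("@#$%"))   # -> ^%@af, by rote
print(respond("xyz!"))   # -> !zyx, a freshly "learned" and equally meaningless pattern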


One of my main points in my argument that I mentioned you did not comment on was sensory information input and experience.

Well, the same holds true for my modified Chinese room thought experiment. The complex set of instructions tells the man what to do when new input (the Chinese messages) is received. New procedures and rules are created (ultimately based on the rulebook acting on input, which represents a computer program with learning algorithms), but the man still doesn't understand a word of Chinese.


This goes hand in hand with the learning ability. If the man in the box were capable, whenever he saw a word, of having some sort of sensory input that would give him an idea of the meaning of the word, then he would begin to learn the language, no?

Absolutely, but note that this requires something other than a set of rules manipulating input. It's easy to say learning is possible when we already have a fully intelligent, thinking, sentient human being. But you seem to forget that I am attacking the idea that it’s possible to have a fully intelligent, thinking, sentient computer in the first place. My point is that if we limit ourselves to the input/output processing we do in the Chinese room, we simply don't get real understanding. And as my modified Chinese room illustrates, we can't get real understanding even with learning algorithms. So I ask again, what else do you have?
 
  • #88
Tisthammerw said:
I don't think it's ridiculous. I asked what else one could add to a computer to make it understand, and there doesn't appear to be anything other than "magic."

Yea, that's right, magic: "any sufficiently advanced technology is indistinguishable from magic" (Arthur C. Clarke). No doubt jet planes would have seemed so during the Middle Ages. It's all a matter of critical points in innovation which usher in qualitative change, thus beginning a revolution. :smile:
 
  • #89
Tom Mattson said:
Tom: All it does is fill in a gap in understanding with something else that is not known.

Tisthammerw: And what is that?

What is what? The gap in understanding, or the unknown thing that your postulate tries to fill it with?

The unknown thing.


As you've noted this is probably better suited for another thread, but you do need to read up on this. There are in fact cyclical models of the universe which include periods of accelerated expansion.

See this for example:

http://pupgg.princeton.edu/www/jh/news/STEINHARDT_TUROK_THEORY.HTML

From the web page:

After 14 billion years, the expansion of the universe accelerates, as astronomers have recently observed. After trillions of years, the matter and radiation are almost completely dissipated and the expansion stalls. An energy field that pervades the universe then creates new matter and radiation, which restarts the cycle.

Sounds awfully speculative, a little ad hoc, like a deus ex machina of a story ("No sufficient matter observed? That's okay. You see, there's this unobserved energy field that creates a whole bunch of matter after trillions of years in the unobservable future to save the day!") and still not without problems (e.g. the second law of thermodynamics).


I'm not just "assuming" it (edit: the human soul) exists;

Begging your pardon, but yes you are.

You cut off an important part of my quote:

I'm not just "assuming" it exists; I offer it as a possible explanation why humans are capable of understanding and why machines are not. In other words, alleged counterexamples of consciousness arising from non-consciousness aren't necessarily valid.

That's the main purpose of me mentioning it (and I admit, I should've explained that earlier). If you want to see some evidential basis why I believe the soul exists, see this web page. Again though, this argument presupposes free will.


I offer it as a possible explanation why humans are capable of understanding and why machines are not.

But this is just another attempt to explain one unknown in terms of another. How could it lead to any real understanding?

Many explanations lead to entities that were previously unknown. Atomic theory postulates unobserved entities to explain data; but that doesn't mean they don't lead to any real understanding. We accept the existence of atoms because we believe we have rational reason to think they are real. The existence of the soul also has rational support and explains understanding, free will, moral responsibility etc. whereas physicalism is insufficient. At least, that's why I believe they lead to understanding.


P.S. I have an argument for the soul, albeit best saved for another thread; in a nutshell it shows that "if free will exists then the soul must exist" but I suspect you do not believe in free will.

You suspect rightly. But anyway, regarding the existence of the soul or a creator, it's not an argument I'm looking for. It's evidence.

Well, evidential arguments for the soul are evidence nonetheless. I'm looking for evidence too. For instance, my direct perceptions tell me I have free will whenever I make a decision. What evidence is there that free will does not exist? A hard determinist could say that my perceptions of volition and moral responsibility are illusory. But if I cannot trust my own perceptions, on what basis am I to believe anything, including the belief that free will does not exist? Apparently none. Determinism and physicalism collapse, and likewise strong AI (confer the Chinese room and variants thereof) seems to be based more on faith than reason.
 
  • #90
saltydog said:
I don't think it's ridiculous. I asked what else one could add to a computer to make it understand, and there doesn't appear to be anything other than "magic."

Yea, that's right, magic: "any sufficiently advanced technology is indistinguishable from magic" (Arthur C. Clarke). No doubt jet planes would have seemed so during the Middle Ages.

No matter how far technological progress continues, there will always be limits; physical laws, for instance. The Chinese room (and variants thereof) still poses a critical problem for strong AI, and you haven't solved it. It is difficult to see how real understanding for a computer could be even theoretically possible (unlike many other pieces of speculative technology). As I've shown, merely manipulating input can't produce real understanding. So I ask, what else do you have?
 
  • #91
tisthammer: are you saying that there are physical laws which apply only to carbon-based systems, which silicon-based systems cannot ever achieve?

are you familiar with the term "self-similar fractals on multiple scales"?
also, at the end of post #70 (you told me to look), I am unsure what that has to do with adaptive techniques.

also, this concept of "understanding": do you believe it lies outside the brain? if so, do you believe that the "soul" lies outside the brain? and thus, if one removes the brain, do the soul/understanding continue to function?

and remember, we are not talking about a desktop PC, though we could be if we were using wireless signals to transmit to a robotic body. We are talking about a robot with the sensory information that a human child would have.

Lastly, you speak of the concept of a soul... if it travels from body to body, why does a child not instantly speak out of the womb? Do you believe it remembers from a past existence...
if so, then what physical realm (not necessarily ours) does this soul exist in?
If not, then what does a soul represent? If its transference to another body does not bring with it knowledge, languages, emotions, or artistic talents, what exactly is the purpose of a "soul" the way you would define it?
if it does not travel from body to body, then does it exist only when a child exists, yet you still believe it not to have a physical presence in our known physics?

IMO, I believe we must discuss your idea of a soul (I believe you said it's for a different thread) here, because it is relevant to the discussion at hand. Firstly,
we must clarify certain terminology (if you have posted your terms above, please refer me to them): Awareness, Consciousness, Understanding, Soul. The terms soul and understanding seem to be a big part of your argument that AI (let's use the proper term now, rather than just "computer") can never have this spiritual existence that you speak of, and thus will only be mimicking a human no matter how real it could be. Which leads me to also ask: isn't it possible that human consciousness is only a byproduct of the fundamental structures of networks that lie within the brain? The constant flow of information to your language zones allows them to produce the words in your head, making you "believe" that you are actually thinking, and this continues for all time?
 
  • #92
neurocomp2003 said:
tisthammer: are you saying that there are physical laws which apply only to carbon-based systems, which silicon-based systems cannot ever achieve?

No, but I am saying there are principles operating in reality that seem to prevent a computer from understanding (e.g. one that the Chinese room illustrates). Computer programs just don't seem capable of doing the job.


are you familiar with the term "self-similar fractals on multiple scales"?

I can guess what it means (I know what fractals are) but I'm unfamiliar with the phrase.


also, this concept of "understanding": do you believe it lies outside the brain?

Short answer, yes. I believe that understanding cannot exist solely in the physical brain, because physical processes themselves seem insufficient to create understanding. If so, an incorporeal (i.e. soul) component is required.


if so, do you believe that the "soul" lies outside the brain?

The metaphysics are unknown, but if I had to guess I'd say it lies "within" the brain.


and thus, if one removes the brain, do the soul/understanding continue to function?

Picture a man in a building. He has windows to the outside world, and a telephone as well. Suppose someone comes along and paints the windows black, cuts telephone lines, etc. But once the building is gone, he can get up and leave. I think the same sort of thing is true for brain damage. The person can't receive the "inputs" from the physical brain and/or communicate the "outputs." If the physical brain is completely destroyed, understanding (which requires inputs) might be possible but would seem to require another mechanism besides the physical brain. This may be possible, and thus so is an afterlife.


and remember, we are not talking about a desktop PC, though we could be if we were using wireless signals to transmit to a robotic body. We are talking about a robot with the sensory information that a human child would have.

That may be true, but the same principles apply: manipulating input through a system of complex rules to produce "valid" output. This doesn't and can't produce understanding, as the Chinese room demonstrates. Visual data is still represented as 1s and 0s, rules of manipulation are still being applied, etc.


Lastly, you speak of the concept of a soul... if it travels from body to body, why does a child not instantly speak out of the womb? Do you believe it remembers from a past existence...

I don't think I believe it travels from "body to body," and I do not believe in reincarnation.

Why does the baby not speak outside of the womb? Well, it hasn't learned how to yet.


if it does not travel from body to body, then does it exist only when a child exists, yet you still believe it not to have a physical presence in our known physics?

I don't know when the soul is created; perhaps it is only when the brain is sufficiently developed to provide inputs. BTW, here's my metaphysical model:

Inputs (sensory perceptions, memories, etc.) -> Soul -> Outputs (actions, etc.)

The brain has to be advanced enough to provide adequate input. In a way, the physical body and brain provide the “hardware” for the soul to do its work (storing memories, providing inputs, a means to do calculations, etc.).


IMO, I believe we must discuss your idea of a soul (I believe you said it's for a different thread) here, because it is relevant to the discussion at hand.

As you wish. Feel free to start a thread in the metaphysics section of this forum. I'll be happy to answer any questions.


Firstly,
we must clarify certain terminology (if you have posted your terms above, please refer me to them): Awareness, Consciousness, Understanding, Soul.

The soul is the incorporeal basis of oneself; in my metaphysical theory it is the "receiver" of the inputs and the ultimate "initiator" of outputs. Awareness, consciousness, and understanding are the "ordinary" meanings as I use them (i.e. if you don't know what they mean, feel free to consult your dictionary; as I attach no "special" meaning to them).


The terms soul and understanding seem to be a big part of your argument that AI (let's use the proper term now, rather than just "computer") can never have this spiritual existence that you speak of, and thus will only be mimicking a human no matter how real it could be. Which leads me to also ask: isn't it possible that human consciousness is only a byproduct of the fundamental structures of networks that lie within the brain?

No, because there would be no "receiver" to interpret the various chemical reactions and electrical activity occurring in the brain. (Otherwise, it would sort of be like the Chinese room.)


The constant flow of information to your language zones allows them to produce the words in your head, making you "believe" that you are actually thinking, and this continues for all time?

The words (and various other inputs) may come from the physical brain, but a soul would still be necessary if real understanding is to take place.
 
  • #93
That is correct, the speech software does make choices that have been taught to it by a teacher! The programmer's only job was to write general learning software, not to teach it how to behave. Unless my job has been a fantasy for the last 15 years.
 
  • #94
If you believe the soul exists and is undefinable within the chemical processes going on in the brain, then the answer is no; but if you believe the brain is the sum of its parts, then the answer is yes.
 
  • #95
tishammer: heh, I think you may need to define what you mean by a rule. Is it like a physics-based rule, where interactions, collisions, and forces dominate, or a math/CS-based rule, where logic is more prevalent?
 
  • #96
hypnagogue said:
What exactly do you mean by this? In fact, computation in the human brain is essentially digital: either a neuron undergoes an action potential or it does not. In principle, there is nothing about the way the human brain computes that could not be replicated in an artificial system.

They have step-by-step processes; we have parallel thinking.
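(As an aside, the all-or-none behavior hypnagogue describes can be sketched with a toy McCulloch-Pitts-style neuron; the weights and threshold below are arbitrary illustrative values:)

```python
# A neuron either "fires" (1) or stays silent (0), with no in-between,
# just as an action potential either happens or does not.
def neuron(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0  # digital, all-or-none output

print(neuron([1, 0, 1], [0.5, 0.5, 0.5], 1.0))  # 1: fires
print(neuron([1, 0, 0], [0.5, 0.5, 0.5], 1.0))  # 0: silent
```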
 
  • #97
Wrong... there is pseudo-parallelism (multithreading, parallel computing); granted, it may be slower than real time, but it still exists. And I believe certain companies are in the midst of developing parallel computers. Look at your sound card, video card, and CPU: they run on separate hardware.
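A minimal sketch of that pseudo-parallelism, using Python's standard threading module:

```python
# Several workers make progress concurrently: interleaved on one core,
# truly simultaneous across separate cores or devices.
import threading

def worker(name):
    print(name, "running")

threads = [threading.Thread(target=worker, args=(f"thread-{i}",)) for i in range(4)]
for t in threads:
    t.start()  # all four are now in flight at once
for t in threads:
    t.join()   # wait for every worker to finish
```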
 
  • #98
The problem with the Chinese room example is that you're trying to argue in favor of a "soul". Could you please show me on a model of the human body where the soul is? Which organ is it attached to?

It's hard to replicate or emulate something that doesn't exist. We as humans are influenced by emotions, and emotions can be programmed. That is the "soul" that keeps being tossed about. To go along with it, we could suppose that it is our "soul" that allows us to feel empathy, pity, joy, sadness, etc. That's the "soul" you refer to, and it's possible to duplicate emotions. We use all of our senses to "understand" the world we live in. We learn from birth how the world works: when it is appropriate to be happy, sad, angry, and so on. I believe that, given sufficient technology, if a machine were "born" with the same senses as human beings, it could so closely replicate human behavior, intuitiveness, and intelligence as to be indistinguishable from the real thing.

An argument was made that a computer couldn't emulate human behavior because a machine can't exceed its programming. Well, a computer can have a "soul" if we program it with one. I agree that we still do not fully understand our own thought processes and how emotions affect our decisions, but that doesn't mean we won't someday. And if we can understand it, we can duplicate it. Someone else said computers can't "understand" human behavior. I have to repeat hypnagogue:

What does it mean to "understand"?

If we teach them what we want them to know, and give them the same faculties as we possess, they will inevitably "understand". If we tell a sufficiently advanced computer something like "when someone dies, they are missed; this is sad", eventually it would understand. Teaching through example is fundamental to human understanding.
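A minimal sketch of that kind of teaching by example, with invented "lessons" (resemblance here is just shared words, a deliberately crude stand-in for whatever a real system would use):

```python
# The machine labels a new statement by its resemblance to statements a
# teacher has already labeled; it was never given a rule for "sad".
from collections import Counter

lessons = [
    ("someone died and is missed", "sad"),
    ("a friend came to visit", "happy"),
    ("the pet was lost", "sad"),
    ("we won the game", "happy"),
]

def classify(sentence):
    scores = Counter()
    words = set(sentence.split())
    for text, label in lessons:
        scores[label] += len(words & set(text.split()))  # shared words as "resemblance"
    return scores.most_common(1)[0][0]

print(classify("the old dog died"))  # "sad" -- by resemblance, not by feeling
```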

I think there is a chasm here, but it's a spiritual one, not a technological one. If you leave the man in the Chinese room with the alphabet and the ruleset, he may never learn Chinese. But if you translate one sentence into English for him and give him sufficient time, eventually he will read Chinese fluently. Does the fact that he needed help to read the Chinese change anything? In some things a machine is lacking (i.e., it has to be taught emotions instead of being born with them), but in some instances it is more advanced (it doesn't get tired, doesn't forget, etc.). A machine will never actually "be" a human being, because one is created naturally and the other artificially. However, this does not mean that a computer can't "understand" what it is to be human.

Let's narrow it down. If you could attach a functioning human brain to a humanoid robot, with all five senses, and allow that brain to operate all those senses, does this "machine" have a soul? Let's say it was the brain of a human child, without any experience as a human being. How would that creation develop? Would it learn the same way it would in a human body? If a machine could process touch, taste, smell, and sight in the same way humans do, wouldn't it basically have the same "understanding" as a human?

I believe the problem is that most people have trouble with the concept of a machine that can duplicate the human experience. It may be sci-fi today, but in 100 or 200 years it may be child's play. People conceptualize a machine as incapable of human understanding because, regardless of the CPU, it does not have the five senses, and because the AI of today is so childlike. When AI has advanced to the point where, given a translation of English to Latin, it can not only understand Latin but every other language in the world, and then create its own linguistics, that will be a machine capable of understanding. And I think that type of intelligence scares people. Because then, we are the children.

EDIT: I believe the original question has been answered: machines can exceed humans in intelligence. Why? Because you can always build a better computer, while we still haven't been able to improve the human brain. Not only that, but last I checked you couldn't connect multiple brains to process information simultaneously.

Therefore, the prominent questions remain: can machines feel? Can machines have a soul?

EDIT 2: I've been thinking about this gap of emotional understanding. We can program a computer to show mercy, but will it understand why it shows mercy? The answer is a complex one. We have to show it, through example, why showing mercy is compassion. We have to teach it why there are benefits to itself in doing such things. Things that to us are beyond simplistic have to be taught to machines. However, a machine would not kill except in self-defense. Emotions are simultaneously our strengths and our weaknesses. But they can be taught.
 
Last edited:
  • #99
You should perhaps read up on Jeff Hawkins' theory of intelligence, and also read his book "On Intelligence".
I plan on designing something along those lines.
 
  • #100
Zantra said:
The problem with the Chinese room example is that you're trying to argue in favor of a "soul".

Actually, I'm primarily using it to argue against strong AI, though I suppose it might also be used to argue in favor of the soul. Still, there doesn't seem to be anything wrong with the Chinese room argument: the person obviously does not understand Chinese.

Could you please show me on a model of the human body where the soul is? Which organ is it attached to?

Probably the physical brain (at least, that's where it seems to interact).


Well, a computer can have a "soul" if we program it with one.

I doubt it. Given what we've learned from the Chinese room, how could machines possibly understand? You'd need something other than complex rules manipulating input for literal understanding to exist. What could the designer possibly add to make a computer understand? A magical ball of yarn?

If we teach them what we want them to know, and give them the same faculties as we possess, they will inevitably "understand".

But given the story of the Chinese room, that doesn't seem possible in principle.

I think there is a chasm here, but it's a spiritual one, not a technological one. If you leave the man in the Chinese room with the alphabet and the ruleset, he may never learn Chinese. But if you translate one sentence into English for him and give him sufficient time, eventually he will read Chinese fluently.

Great, but that doesn't help the strong AI thesis. It's easy to say understanding is possible when we already have a fully intelligent, thinking, sentient human being. But you seem to forget that I am attacking the idea that it's possible to have a fully intelligent, thinking, sentient computer in the first place. My point is that if we limit ourselves to input/output processing, as we do in the Chinese room, we simply don't get real understanding. The man may be capable of learning a new language, but this clearly requires something other than a complex set of rules for input/output processing. My question: so what else do you have?


Let's narrow it down. If you could attach a functioning human brain to a humanoid robot, with all five senses, and allow that brain to operate all those senses, does this "machine" have a soul?

The human brain does.

Let's say it was the brain of a human child, without any experience as a human being. How would that creation develop? Would it learn the same way it would in a human body? If a machine could process touch, taste, smell, and sight in the same way humans do, wouldn't it basically have the same "understanding" as a human?

I suppose so, given that this is the brain of a human child. But this still doesn't solve the problem of the Chinese room, nor does it imply that a computer can be sentient. Computer programs use a complex set of instructions acting on input to produce output. As the Chinese room illustrates, that is not sufficient for understanding. So what else do you have?

People conceptualize a machine as incapable of human understanding because, regardless of the CPU, it does not have the five senses.

It's difficult to see why that would make a difference. We already have cameras and microphones that can be plugged into a computer, for instance. Machines can convert the sounds and images to electrical signals, to 1s and 0s, and process them according to written instructions, but we still have the same problem that the Chinese room points out.
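To make that conversion concrete, a minimal sketch with a synthetic sine wave standing in for a microphone signal (the sample rate and 8-bit depth are illustrative assumptions):

```python
# Sample an "analog" signal and quantize it to bits; everything a computer
# does afterward is rule-following on these 1s and 0s.
import math

def sample_and_quantize(n_samples=8, rate_hz=8000, freq_hz=440, bits=8):
    levels = 2 ** bits
    out = []
    for n in range(n_samples):
        analog = math.sin(2 * math.pi * freq_hz * n / rate_hz)  # the "electrical signal"
        digital = int((analog + 1) / 2 * (levels - 1))          # map -1..1 to 0..255
        out.append(format(digital, "08b"))                      # the raw bits
    return out

print(sample_and_quantize())  # e.g. ['01111111', '10101010', ...]
```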
 
