Not Artificial

  • Thread starter Mentat

Is the intelligence of a man-made computer artificial?

  • No, I agree with Mentat

    Votes: 4 26.7%
  • No, but for different reasons than Mentat's

    Votes: 6 40.0%
  • Yes, because...

    Votes: 5 33.3%

  • Total voters
    15
Originally posted by Mentat
Yes, everything is natural, because everything exists within "Nature".
I have trouble understanding you. Do you want mankind to never use the word "artificial" again? Why are words created? To distinguish things, not in an absolute sense, but in a relative sense. It's quite obvious that the term "artificial" is relative. What good is reasoning that, because everything is within Nature, anything you create with your hands is a naturally occurring thing? Would you like to keep saying that computers appeared naturally on this planet, like another species? Yes, you could, but WHY?

Why do we have the concept of charge? Is everything neutral because everything exists within nature? I think there is even a term for such wordplay. What do you mean by 'everything'? All individual entities taken *together*, or any individual entity separately?

You are misusing the word "artificial" here. Perhaps you should look it up in a dictionary. No offense implied, I just think you should reconcile your reasoning with the actual meaning(s) of the word "artificial".
Hmm, I don't see it. How exactly did I misuse it? Of course I checked the dictionary before posting. Like any word in English, this one has several meanings. Besides, people assign additional meanings.

You restrict "artificial" to only man-made things. I dislike such a human-centric view, and expand it to monkey-made, ant-made, etc. Then it's natural to expand this to non-biological relations as well, and you notice a pattern: a closed system as a whole has no concept of "artificial". Only relations between systems give 'artificial' meaning. So 'artificial' makes sense only 'in relation to' a specified entity. Computers are artificial to nature if compared to nature without man. For nature with man, they form a closed system.

Can intelligence really be artificial?
This is a bit deeper a question than it seems. You need to ask what Intelligence is in the first place. Don't just stick with the dictionary; consider its deeper meaning. Is a mirror intelligent? Is a river flowing in its riverbed intelligent? Are reflex reactions a sign of intelligence? Is a computer executing in its preprogrammed 'riverbed' intelligent?

At some point, there is a split between intelligence that arises from repeating an already-known program, and intelligence that has the capacity to create new knowledge from inadequate input data. That's called forced induction, afaik. Decisions made from adequate input data, as with past experience, are called free induction, iirc. This is the main difference between animals and man. When animals confront a situation not covered by their past experience, they have only one reaction in their program: fear.

So, induction based on solid facts is one side of the coin, and induction based on very shaky and uncertain ground is the other. The latter is very likely Nth-degree speculation and is highly likely false. It's the quality of such induction that makes humans special. Humans are equipped with a brain that has the capacity to change its decisions as the quality of partial facts changes. Thus, although initially empty and very likely to fail, over the course of a life all of the partial facts help to make successful inductions. It's a system that evolves internally, and although all people are equipped equally, they develop very different levels of intelligence. Like the same holographic plate can hold images with differing levels of detail, depending on which parts of it are used.

Now, it may be easier to see that copying knowledge of facts and their relations is only a small part of intelligence. This can easily be done with computers. But to make a computer intelligent on a scale similar to humans, one first needs to equip it with the ability to evolve, and then let it evolve over time. Even though we still don't know how the brain is able to do that, if we suppose that we can copy it, that is not enough to make intelligence. Even if we copy all the partial facts of a given human, we replicate a specific entity. But it then has the capacity to evolve beyond what we equipped it with, and that part of it can't really be artificial. Such an intelligence becomes genuine shortly after being released from the factory. Self-interaction of a closed system, with only partial inputs from outside, leads to a whole that is much larger than the sum of its parts.
You can transfer (partial) facts, but you can't transfer understanding; it's something inherent to an entity, and must happen inside.

Whether a man-made computer with the ability to do forced induction, which has developed an IQ of xxx over the course of a hundred years, is artificial intelligence depends, in my view, on the difference between the amount of intelligence transferred to it and the amount it was able to develop on its own. If the difference is nil, it's artificial, as in man-made; if the difference is large, it's genuine. And I don't consider the amount of factual data to be intelligence, so knowledge != intelligence. It's not the knowledge that counts, but what you DO with it.
 
I voted yes, it is artificial intelligence, because of your included definition of artificial: not natural. I dispute that there is any real artificial intelligence above that of a flatworm, or at best an ant. Whether or not there is, it is still manufactured and not grown, and therefore (there's that word again) is, by your supplied definition, artificial.
 
Originally posted by heusdens
It is of course true that "artificial" is an artificial concept itself; nature does not have the concept of "artificial", but humans do. Yet to humans it is a meaningful concept, and it distinguishes between things that exist within nature (including man itself) that did not in any way depend on the existence of humans, and modifications to the natural reality.
Houses do not come into existence because of Nature alone, but because humans build them, despite the fact that all the materials, and even the energy used when building a house, come from nature.
How is "humans built it" different from "it came about naturally"? I understand that humans are very different from most other forms of nature, but that doesn't make them separate from nature altogether.
 
Originally posted by wimms
I have trouble understanding you. Do you want mankind to never use the word "artificial" again? Why are words created? To distinguish things, not in an absolute sense, but in a relative sense. It's quite obvious that the term "artificial" is relative. What good is reasoning that, because everything is within Nature, anything you create with your hands is a naturally occurring thing? Would you like to keep saying that computers appeared naturally on this planet, like another species? Yes, you could, but WHY?
"Naturally appeared"? This is the problem. No species just appeared out of nowhere; they were produced by natural causes. And, since humans are part of Nature, humans are a "natural cause". Thus, man-made computers are natural.

Why do we have the concept of charge? Is everything neutral because everything exists within nature? I think there is even a term for such wordplay. What do you mean by 'everything'? All individual entities taken *together*, or any individual entity separately?
Both. All entities exist within the universe, don't they? If so, then all entities were produced naturally.

Hmm, I don't see it. How exactly did I misuse it? Of course I checked the dictionary before posting. Like any word in English, this one has several meanings. Besides, people assign additional meanings.
Well, you were using "artificial" to mean anything new. This is not so. "Artificial" means "crafted" (it can also mean "not genuine", which is, of course, the definition that I was referring to at the beginning of this thread).

You restrict "artificial" to only man-made things. I dislike such a human-centric view, and expand it to monkey-made, ant-made, etc.
No, I agree with that. This is just further proof that "artificial" things are really natural (as they occur in nature all of the time).

Then it's natural to expand this to non-biological relations as well, and you notice a pattern: a closed system as a whole has no concept of "artificial". Only relations between systems give 'artificial' meaning. So 'artificial' makes sense only 'in relation to' a specified entity. Computers are artificial to nature if compared to nature without man. For nature with man, they form a closed system.
"Compared to nature without man"? If there were no humans, "artificial" intelligence wouldn't exist at all.

This is a bit deeper a question than it seems. You need to ask what Intelligence is in the first place. Don't just stick with the dictionary; consider its deeper meaning. Is a mirror intelligent? Is a river flowing in its riverbed intelligent? Are reflex reactions a sign of intelligence? Is a computer executing in its preprogrammed 'riverbed' intelligent?
I don't understand what "preprogrammed 'riverbed'" means. A computer doing what it was programmed to do is (of course) intelligent because what it was programmed to do is "be intelligent". I defined intelligence at the beginning of this thread, btw.

At some point, there is a split between intelligence that arises from repeating an already-known program, and intelligence that has the capacity to create new knowledge from inadequate input data.
But that capacity is "creativity", and (IMO) just requires more complex programming (like human programming).

This is the main difference between animals and man. When animals confront a situation not covered by their past experience, they have only one reaction in their program: fear.
Humans are animals. Other animals' brains are not complex enough to respond creatively, but that doesn't make their intelligence "artificial" (meaning "not genuine"), does it?

Now, it may be easier to see that copying knowledge of facts and their relations is only a small part of intelligence. This can easily be done with computers. But to make a computer intelligent on a scale similar to humans, one first needs to equip it with the ability to evolve, and then let it evolve over time.
Or force them to evolve beyond themselves, as we have already done (from behemoth, primitive chess-playing computers to the common PC).
 
Originally posted by Mentat
"Naturally appeared"? ... humans are a "natural cause". Thus, man-made computers are natural. All entities exist within the universe, don't they? If so, then all entities were produced naturally.
If you didn't notice yet, I'm not arguing this. That's too basic to even think about objecting to.
Can you for a moment take a pause from repeating that the word "artificial" has no absolute meaning, and try to think of its relative meaning?

Well, you were using "artificial" to mean anything new. This is not so. "Artificial" means "crafted" (it can also mean "not genuine", which is, of course, the definition that I was referring to at the beginning of this thread).
The misunderstanding is on your part. I did NOT mean 'anything new'. Have you heard the saying "not invented here"? If you accept something that I say as a fact and base your reasoning on it, then it falls into the "not invented here" category: you copy knowledge, mimic, feign, without any mental effort on your part. Then it's artificial in regard to you, not because it's new, but because it was created by someone else and then imparted to you. It may be a lie, it may be dangerous to your life, it may distress your mental stability. I'm crafting knowledge in you that you can either accept or resist. It's artificial knowledge in regard to you alone.
But when you yell eureka, not because you've just discovered you're a boy, but because you've reached a valid and true conclusion based on awfully uncertain facts, then, even if it's a reinvented wheel, it's genuine to you; it was invented by you.
When you grow your own bone, it's genuine, but if you were forced to get an implanted substitute for a bone, you'd call it an artificial bone, not because it's artificial to the universe, but only to your body.

I don't understand what "preprogrammed 'riverbed'" means. A computer doing what it was programmed to do is (of course) intelligent because what it was programmed to do is "be intelligent".
The programmed riverbed is about a computer's freedom: its program flow is as defined as a river's flow. If you confront a computer with undefined facts, partial truths, or unaccounted-for situations, it will fail, deadlock, crash. It has no capacity to reason, understand, manipulate and form abstract concepts. You can't program a computer to 'be intelligent', you can only program it to 'look intelligent', and that's precisely why it's called AI. In the distant future, we might be able to program it to 'have the capacity' to develop intelligence, but hardly any more than that.

But that capacity is "creativity", and (IMO) just requires more complex programming (like human programming).
A bull's 'creativity' is bs. :wink: Humans call it art. Attaching a label to it doesn't make it any simpler. You think it's a program that's more complex, but the problem is that humans are incapable of grasping even the starting point for such a program, let alone writing it. The problem is in thinking it up, not doing it.
Have you ever programmed anything on a computer? 'Human programming', if it's what I think it is, is far, far from computer programming; it's the training of an existing program. To create a program as sophisticated and trainable as the one in the human brain *may* stay unreachable to human intellect forever... Much like monkeys can stare at a TV set for ages without any chance of making one. Hopefully not, but it's not 'just a little more complex programming'.
You've been active in every thread that has the word 'paradox' mentioned in it; now imagine writing a program that has to face them around every corner.

Humans are animals. Other animals' brains are not complex enough to respond creatively, but that doesn't make their intelligence "artificial" (meaning "not genuine"), does it?
Why did you say that animals' intelligence is artificial, and then argue against that? I certainly didn't give any reason to. Did you get the distinction between free induction and forced induction? They are both parts of intelligence, and the latter extends the former. My point was that we could mimic animal intelligence, and it might be indistinguishable from the real thing, but we cannot mimic the other kind; it's not defined. We could only build a computer that can develop it on its own. The key to understand here is that it would need to operate with partial truths, a swamp of paradoxes, partially defined variables, and include more than physically fits inside it. Then you can apply 'human programming' to it and make it either smart or an idiot. But an unpredictable computer in mental distress is hardly what we'd call a PC.

Or force them to evolve beyond themselves, as we have already done (from behemoth, primitive chess-playing computers to the common PC).
PCs are the same as they were on day 1: as dumb as a mousetrap. They have just increased in complexity, and shuffle more bits around in the same amount of time. Quantity != IQ.

P.S. I'm really not interested in arguing here. You asked for an opinion, I gave mine. I'm not here to convert anyone.
 
Originally posted by wimms
If you didn't notice yet, I'm not arguing this. That's too basic to even think about objecting to.
Can you for a moment take a pause from repeating that the word "artificial" has no absolute meaning, and try to think of its relative meaning?
I was thinking of its relative meaning when I started out. I was thinking of its meaning "not genuine" and how that didn't apply to man-made computer intelligence. It's not my fault that some of the members didn't get which definition of "artificial" I was talking about, but, since it turned into a discussion of a different kind of "artificial", I was explaining why that one doesn't apply either.

The misunderstanding is on your part. I did NOT mean 'anything new'. Have you heard the saying "not invented here"? If you accept something that I say as a fact and base your reasoning on it, then it falls into the "not invented here" category: you copy knowledge, mimic, feign, without any mental effort on your part. Then it's artificial in regard to you, not because it's new, but because it was created by someone else and then imparted to you.
So, because it was not my own invention, it is artificial?

It may be a lie, it may be dangerous to your life, it may distress your mental stability. I'm crafting knowledge in you that you can either accept or resist. It's artificial knowledge in regard to you alone.
Again, this is not what "artificial" means. It's "alien"/"foreign"/"external"/etc, but it's not artificial.

But when you yell eureka, not because you've just discovered you're a boy, but because you've reached a valid and true conclusion based on awfully uncertain facts, then, even if it's a reinvented wheel, it's genuine to you; it was invented by you.
"It's genuine to me"? Do you mean that I accept its being genuine?

When you grow your own bone, it's genuine, but if you were forced to get an implanted substitute for a bone, you'd call it an artificial bone, not because it's artificial to the universe, but only to your body.
It's alien/foreign/new to my body, but it is a genuine replacement, and it was produced naturally.

The programmed riverbed is about a computer's freedom: its program flow is as defined as a river's flow. If you confront a computer with undefined facts, partial truths, or unaccounted-for situations, it will fail, deadlock, crash. It has no capacity to reason, understand, manipulate and form abstract concepts. You can't program a computer to 'be intelligent', you can only program it to 'look intelligent', and that's precisely why it's called AI.
This is not true at all. Simple example: chess engines. A computer with a chess engine can be confronted with a variation that was never programmed into it (as not all variations can be programmed into it; that's impossible), and it will respond.
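Mentat's point can be sketched in code: an engine stores the rules and a search procedure, not a table of positions, so it can respond to a position it was never explicitly given. A minimal minimax sketch, using the toy game of Nim in place of chess for brevity (the game choice and the 7-stone example are illustrative assumptions, not from any real engine):

```python
# A game engine stores rules plus a search procedure, not a table of
# positions, so it can answer positions it was never explicitly given.
# Nim (take 1-3 stones; whoever takes the last stone wins) stands in
# for chess to keep the sketch short.

def legal_moves(pile):
    """The rules: you may take 1, 2, or 3 stones."""
    return [n for n in (1, 2, 3) if n <= pile]

def minimax(pile):
    """Value of the position for the player to move: +1 win, -1 loss."""
    if pile == 0:
        return -1  # the opponent took the last stone; we have lost
    return max(-minimax(pile - n) for n in legal_moves(pile))

def best_move(pile):
    """Choose the move whose resulting position is worst for the opponent."""
    return max(legal_moves(pile), key=lambda n: -minimax(pile - n))

# Nothing above mentions a 7-stone pile, yet the engine responds:
print(best_move(7))  # takes 3, leaving a losing pile of 4
```

A real chess engine replaces `legal_moves` with the chess rules and cuts the search off with an evaluation function, but the principle is the same: the response to an unseen position is derived, not looked up.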

Have you ever programmed anything on a computer? 'Human programming', if it's what I think it is, is far, far from computer programming; it's the training of an existing program.
Yes, "training of an existing program". How's that different from what we do to man-made computers?

To create a program as sophisticated and trainable as the one in the human brain *may* stay unreachable to human intellect forever... Much like monkeys can stare at a TV set for ages without any chance of making one.
Interesting point; however, a monkey will never have the ambition/creativity/intelligence to even want to make one.

Hopefully not, but it's not 'just a little more complex programming'.
You've been active in every thread that has the word 'paradox' mentioned in it; now imagine writing a program that has to face them around every corner.
The computer within my skull has been faced with quite a few paradoxes, and I'm still alive. If I see that they can't be resolved, then I call their irresolvability a resolution. (Paradox applied to paradox, in case you didn't notice.)

Why did you say that animals' intelligence is artificial, and then argue against that? I certainly didn't give any reason to. Did you get the distinction between free induction and forced induction? They are both parts of intelligence, and the latter extends the former. My point was that we could mimic animal intelligence, and it might be indistinguishable from the real thing, but we cannot mimic the other kind; it's not defined. We could only build a computer that can develop it on its own.
When humans have children, the children are endowed with a complex (genetic) program, which the children then expand on for the rest of their lives. How is this different from a man-made computer (setting aside the fact that a child's genetic programming is much more complex than a man-made computer's)?

PCs are the same as they were on day 1: as dumb as a mousetrap. They have just increased in complexity, and shuffle more bits around in the same amount of time. Quantity != IQ.
What kind of sense does that make? If they are more complex, then they are not as "dumb" as they used to be.

P.S. I'm really not interested in arguing here. You asked for an opinion, I gave mine. I'm not here to convert anyone.
And your opinion happens to have enough merit for me to consider it worth debating. I hope this doesn't turn into an argument, but it is a debate, and there's nothing wrong with debates.
 
Originally posted by Mentat
So, because it was not my own invention, it is artificial?

Again, this is not what "artificial" means. It's "alien"/"foreign"/"external"/etc, but it's not artificial.

"It's genuine to me"? Do you mean that I accept its being genuine?

It's alien/foreign/new to my body, but it is a genuine replacement, and it was produced naturally.
You still haven't decided whether you mean artificial in an absolute sense or only a relative one. You mix them all the time. You just switched the relative meaning to the absolute one, and you get that the bone was produced naturally. Who cares? Alien/foreign does not describe that it mimics and replaces your genuine bone. That's what the word artificial is used for.

If you take artificial in the relative sense, then you immediately have a problem: where to draw the line. Think about it. I'm not saying that I must be right, I'm merely saying that you 'could' look at it that way. That's more than the dictionary allows.

If your body grows a bone, 'do you mean that you accept its being genuine'? If you invent something new, it IS genuine. It's just not new to everyone. If I forcibly install a belief in you, it's not just alien/foreign; if you accept it and start using it in reasoning, you've been programmed, some thoughts cloned.

It's just that you can view relations between closed systems in terms of genuine vs artificial. It's not conventional, and maybe mad, but that's one way.

Take Nature as a closed system, take a human as standing outside of it, and after a while putting a robot into Nature. It's artificial, but to whom? So what are the teachings of parents to a baby? Artificial experience. You don't like that usage of the word? Don't use it. How do I draw the line? I told you that earlier. When two closed systems interact, they change each other. To reshape the other system, you need first to overcome its strength, destroy the old order and impress the new one. That makes heat, entropy. But when one system internally forms a new order from excess heat, that's self-interaction; it consumes heat. This is the distinction. And of course, after all, everything in nature is ... natural.

This is not true at all. Simple example: chess engines. A computer with a chess engine can be confronted with a variation that was never programmed into it (as not all variations can be programmed into it; that's impossible), and it will respond.
You object in vain. It's not about variations; no one cares about them. It's about conditions, methodologies, algorithms. Formulas, not values. Chess engines have been 100% defined. It's too simple a game. For a computer, everything is defined once you describe the rules of the game and it has an unambiguous, non-contradictory perception of them. If the other player made a false move, and it wasn't defined as false, the computer would get lost. If the rules of the game were contradictory, it would be unable to resolve many situations. Imagine an opponent making partial moves, 1/3 of one piece and 2/3 of another. If that's not defined as a legal move, the computer will stop or ignore it. It's simple to define all the chess rules, and the computer will never fail. But it'll be an amazingly dumb chess player. To go beyond that, you need to teach it to truly 'think', and that's the tough part.
Intelligence is more than being able to react to black&white scenarios. In chess, there may be thousands of paths that all seem equal. A human will never think them all through; he will have intuition, 'tunnel vision'. A computer has only one option: try them all, and while at it assign cost/benefit values for a later decision. How it weighs values is defined 100% by an algorithm, preprogrammed and 100% defined to avoid failure upon hitting an unaccounted-for situation. A human decides without having even an idea of cost/benefit; he may switch algorithms on the fly, create a new one, etc. Computer chess can never by itself create a new algorithm. It all must be preprogrammed.
A chess engine has only one thing: the capacity to play chess. It has no capacity to think.

To create an algorithm, the intelligence of the creator must be higher than the intelligence of the algorithm. Notice the contradiction. How intelligence in the human brain develops is a mystery. It does the impossible: a monkey mind writes physics textbooks, a monkey creates Einsteins.

Yes, "training of an existing program". How's that different from what we do to man-made computers?

The computer within my skull has been faced with quite a few paradoxes, and I'm still alive. If I see that they can't be resolved, then I call their irresolvability a resolution. (Paradox applied to paradox, in case you didn't notice.)
We don't train computers, we define them. They have no room for error, and that means no room for self-made intelligence. Your skull isn't finished yet; it develops, creates its own program. That will stop somewhere in your life, and after that you'd be an 'old school' guy.

When humans have children, the children are endowed with a complex (genetic) program, which the children then expand on for the rest of their lives. How is this different from a man-made computer (setting aside the fact that a child's genetic programming is much more complex than a man-made computer's)?
Children are endowed with a BIOS only. The rest gets created by them. They have the capacity to reprogram themselves, and not only logically, but also to physically redesign the 'computer'. DNA has only a very small amount of the 'functions' needed to sustain the body. Intelligence is born inside the brain, with only partial help from outside. The quality of the stuff that comes from outside pretty much shapes the limits of a brain. There is a reason for the saying 'food for the mind'.

What kind of sense does that make? If they are more complex, then they are not as "dumb" as they used to be.
OK, imagine a small heap of sand. Simple. Imagine a huge mountain. Magnitudes more sand, much more complex. Is the mountain 'smarter'? OK, let's build a laser-guided, electronically controlled, programmable, infrared-sensitive ... mousetrap. It's more complex, but is it smarter? Can it 'think'?
 

FZ+

If you take artificial in the relative sense, then you immediately have a problem: where to draw the line. Think about it. I'm not saying that I must be right, I'm merely saying that you 'could' look at it that way. That's more than the dictionary allows.
You don't draw the line. You accept that one thing is "more" artificial than another. The drawing of the line is an absolutist concept.

Intelligence is more than being able to react to black&white scenarios. In chess, there may be thousands of paths that all seem equal. A human will never think them all through; he will have intuition, 'tunnel vision'. A computer has only one option: try them all, and while at it assign cost/benefit values for a later decision. How it weighs values is defined 100% by an algorithm, preprogrammed and 100% defined to avoid failure upon hitting an unaccounted-for situation. A human decides without having even an idea of cost/benefit; he may switch algorithms on the fly, create a new one, etc. Computer chess can never by itself create a new algorithm. It all must be preprogrammed.
Now that is unjustified. How do you know that humans don't consider options? What the evidence suggests is that humans go down a list of the most likely things to do, which they considered previously and sorted by how successful they were before, etc. Why do you believe that humans don't use algorithms they learned from experience too?
And computers can self-program. Read up on genetic and evolutionary algorithms, and the newest chess computer, "Deep Junior". Most chess computers now feature the production of their own algorithms to counter the thinking algorithms, or "habits", of individual players.
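The evolutionary-algorithm idea FZ+ mentions can be shown in a toy sketch (this is a generic genetic algorithm, not how Deep Junior works; the OneMax problem, population size, and mutation rate are arbitrary illustrative choices): evolve a bitstring toward all 1s using only selection, crossover, and mutation, with nothing in the code encoding the answer directly.

```python
# Minimal genetic algorithm sketch: evolve a bitstring toward all 1s
# (the "OneMax" toy problem). No line below states the solution;
# selection, crossover, and mutation discover it.
import random

random.seed(0)  # fixed seed so the run is repeatable
LENGTH, POP, GENERATIONS = 20, 30, 60

def fitness(genome):
    return sum(genome)  # OneMax: count the 1-bits

def mutate(genome, rate=0.02):
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, LENGTH)  # single-point crossover
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]  # keep the fitter half (truncation selection)
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children

best = max(pop, key=fitness)
print(fitness(best))  # should be at or near LENGTH after evolving
```

The relevance to the debate: the final population contains structure that was never written down by the programmer, only the *procedure* for finding it was.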

A chess engine has only one thing: the capacity to play chess. It has no capacity to think.
That's because you have restricted it to playing chess. Learning programs can learn to do anything, including thinking.

To create an algorithm, the intelligence of the creator must be higher than the intelligence of the algorithm.
How do you judge the intelligence of an algorithm? Do you sit it through an IQ test? And the fact remains that evolution allows algorithms to be developed without any intelligence at all.
And given this, why can't a computer think?

How intelligence in the human brain develops is a mystery. It does the impossible: a monkey mind writes physics textbooks, a monkey creates Einsteins.
It isn't a mystery. The whole is greater than the sum of the parts. Basic tenet of neural networking.
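The "whole greater than the sum of its parts" tenet has a classic concrete instance: no single threshold neuron can compute XOR, but three of them wired together can. A hand-wired sketch (the weights are chosen for illustration, not trained):

```python
# No single threshold unit can compute XOR (it is not linearly
# separable), but three identical units wired together can:
# the capability emerges from the combination, not from any part.

def unit(inputs, weights, bias):
    """One threshold neuron: fires (1) if the weighted sum exceeds 0."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) + bias > 0 else 0

def xor_net(x, y):
    h1 = unit([x, y], [1, 1], -0.5)       # fires if x OR y
    h2 = unit([x, y], [1, 1], -1.5)       # fires if x AND y
    return unit([h1, h2], [1, -2], -0.5)  # OR but not AND -> XOR

for x in (0, 1):
    for y in (0, 1):
        print(x, y, xor_net(x, y))  # prints the XOR truth table
```

Each `unit` on its own is as "dumb as a mousetrap"; the arrangement is what computes something none of them can.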

We don't train computers, we define them. They have no room for error, and that means no room for self-made intelligence. Your skull isn't finished yet; it develops, creates its own program. That will stop somewhere in your life, and after that you'd be an 'old school' guy.
But some computers can be trained. Some computers are made with a capacity for learning. Instinct doesn't train mankind; it defines us. But we can rise beyond instinct, can't we? Then computers can rise beyond their programming.
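A "capacity for learning" can be illustrated with the simplest trainable unit, a perceptron: the program is given only a learning rule, never the answer, and adjusts its own weights from labeled examples (AND is used here as an arbitrary learnable target):

```python
# The program below is "defined" only with a learning rule, not with
# the answer: shown labeled examples of AND, it adjusts its own
# weights until its outputs match the examples.

w = [0.0, 0.0]  # weights, initially knowing nothing
b = 0.0         # bias
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(10):  # a few passes over the data suffice for AND
    for x, target in data:
        error = target - predict(x)  # perceptron learning rule
        w[0] += 0.1 * error * x[0]
        w[1] += 0.1 * error * x[1]
        b += 0.1 * error

print([predict(x) for x, _ in data])  # learned AND: [0, 0, 0, 1]
```
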
Children are endowed with a BIOS only. The rest gets created by them. They have the capacity to reprogram themselves, and not only logically, but also to physically redesign the 'computer'. DNA has only a very small amount of the 'functions' needed to sustain the body. Intelligence is born inside the brain, with only partial help from outside. The quality of the stuff that comes from outside pretty much shapes the limits of a brain. There is a reason for the saying 'food for the mind'.
But do they redesign themselves? I contend, for fun, that the external environment redesigns them. Like we may be the external environment redesigning a computer.
DNA only has a small amount of functions? You must be joking. And in a lot of cases, we simply cannot exceed our programming; our hormones and instincts force us along. Children are endowed with a BIOS and an operating system, and programmed along the way with software through the senses.

OK, imagine a small heap of sand. Simple. Imagine a huge mountain. Magnitudes more sand, much more complex. Is the mountain 'smarter'? OK, let's build a laser-guided, electronically controlled, programmable, infrared-sensitive ... mousetrap. It's more complex, but is it smarter? Can it 'think'?
Imagine one cell. Can it think? Let's make it more complicated and build a body. I think it can think now, can't it? Whether it becomes smarter depends on the form you make with it, as well as the complexity. If the complexity is arranged in a way that allows a mind, then it would be smarter. If a computer is made more complex with the aim of duplicating the mind, it can and will duplicate a mind.
 
Originally posted by FZ+
Now that is unjustified. How do you know that humans don't consider options? What the evidence suggests is that humans go down a list of the most likely things to do, which they considered previously and sorted by how successful they were before, etc. Why do you believe that humans don't use algorithms they learned from experience too?
I didn't say any of that, not even remotely close. If you play chess by evaluating each possible move, you'll lose to an 8-year-old. If you can describe your algorithms, they'd like to hear from you. Sure, you do use some, but you also create them from experience.

And computers can self-program. Read up on genetic and evolutionary algorithms, and the newest chess computer, "Deep Junior". Most chess computers now feature the production of their own algorithms to counter the thinking algorithms, or "habits", of individual players.
Self-programming and algorithm creation are different things. Viruses self-program. No big deal. What evolutionary algorithms? The kind that researchers try out and then spend a lifetime trying to understand what happened, without any prior understanding?

And my point was precisely this - we can give computers a basis to evolve; the rest they have to do themselves. Our problem is creating even a basis that survives.

How do you judge the intelligence of an algorithm? Do you sit it through an IQ test? And the fact remains that evolution allows algorithms to be developed without an intelligence at all.
And from this, why can't a computer think?
So far it isn't allowed to. Intelligence of an algorithm? Dunno, maybe you measure the ratio of the whole to the sum of the parts using it...
I didn't say a computer can't think. I said it doesn't think, and won't for quite some time.

It isn't a mystery. The whole is greater than the sum of the parts. Basic tenet of neural networking.
Sorry, I didn't know there are people here who build 'thinking machines' like pancakes... If you say so, then it's soo easy...

But some computers can be trained. Some computers are made with a capacity for learning. Instinct doesn't train mankind - it defines us. But we can rise beyond instinct, can't we? Then computers can rise beyond their programming.
Instincts define us. Computers cannot be trained. Some research computers can be 'trained' experimentally, which is basically adapting to a specific case. Hell, my car's automatic transmission adapts to me - trained. Yes, we rise beyond our instincts. No, computers cannot rise beyond their programming; it's limited by our programming. When we learn to program them with evolutionary mumbo-jumbo that is capable of learning, evolving and creating its own algorithms, all that without fatal crash bugs, and that doesn't top out at a 3-year-old's IQ level, then yes, computers will take off.
But notice the major point: we don't program them to be intelligent, we give them the capacity to become intelligent. And that's where the artificial part ends. Our job is to provide the best package possible; the rest is beyond us.

Imagine one cell. Can it think? Let's make it more complicated, build a body. I think it can think now, can't it? Whether it becomes smarter depends on the form you make with it, as well as the complexity. If the complexity is arranged in a way that allows a mind, then it would be smarter. If a computer is made more complex with an aim to duplicate the mind, it can and will duplicate a mind.
Nice logic. Dinosaurs were a bit larger than humans, which means a helluva lot more cells, don't you think? I imagine brontos playing chess now. You seem to think that 'if the complexity is arranged in a way that allows a mind' is just a tiny-tiny step left to be made. It's the major problem. Much bigger than making a 100 THz Pentium 78 and putting a few thousand of them together, or a manned mission to Jupiter.

There was a massively parallel computer made by Thinking Machines; it had 65K small CPUs, like a neural net. Awesome box. It died. Guess why? The human mind is unable to program such a massively parallel computer. See, it's not a technology issue.
 

FZ+

1,550
2
I didn't say any of that. Not even remotely close. If you play chess by evaluating each possible move, you'll lose to an 8-year-old. If you can describe your algorithms, researchers would like to hear from you. Sure, you do use some, but you also create them from experience.
And why can't computers do the same?

What evolutionary algorithms? The ones researchers try out and then spend a lifetime figuring out what happened, without any prior understanding?
Basically, it's coupling a degree of randomness to a selector that lets the best survive. These then develop themselves to produce their own programming - often even better than a human could think up. That's programming - creating their own algorithms - without a programmer.
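The randomness-plus-selector loop described here can be sketched in a few lines. This is an editor's toy illustration, not anything from Deep Junior: the "program" being evolved is just a bit string, and fitness is the number of 1s (the classic OneMax problem), standing in for any measurable quality of a candidate solution.

```python
import random

random.seed(0)  # fixed seed so the run is repeatable

GENOME_LEN = 32
POP_SIZE = 20

def fitness(bits):
    # The selector's yardstick: here, simply the count of 1 bits.
    return sum(bits)

def mutate(bits, rate=0.05):
    # The "degree of randomness": each bit flips with small probability.
    return [b ^ 1 if random.random() < rate else b for b in bits]

# Start from a completely random population.
pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]

for generation in range(200):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == GENOME_LEN:
        break
    survivors = pop[:POP_SIZE // 2]  # the selector: best half survive
    offspring = [mutate(random.choice(survivors))
                 for _ in range(POP_SIZE - len(survivors))]
    pop = survivors + offspring

best = max(pop, key=fitness)
print("best fitness after generation", generation, ":", fitness(best))
```

No step "understands" the goal; selection plus variation alone drives the population toward it, which is the whole point being made.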

I didn't say a computer can't think. I said it doesn't think, and won't for quite some time.
Ah... sorry... confusion on my part.

Should...read...whole...post....

Sorry, I didn't know there are people here who build 'thinking machines' like pancakes... If you say so, then it's soo easy...
Not like pancakes, no... Heh. But I think we have a theoretical beginning on how such a machine could work, modeled on our current understanding of the brain. Currently it is thought that thought is the product of a sort of mini-evolution in the brain, in terms of competing impulses and such. It's a mega-network of small, semi-intelligent bits that work together in harmony.
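The "network of small bits working together" point, i.e. the whole being greater than the sum of the parts, can be made concrete with a toy sketch (an editor's illustration, with hand-picked weights rather than a trained network): no single linear threshold unit can compute XOR, but a tiny network of three such units can.

```python
def neuron(inputs, weights, bias):
    """A linear threshold unit: fires (1) if the weighted sum exceeds 0."""
    s = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if s > 0 else 0

def xor_net(x1, x2):
    # Three simple units; none of them alone can do XOR.
    h_or = neuron([x1, x2], [1, 1], -0.5)      # fires if x1 OR x2 fires
    h_nand = neuron([x1, x2], [-1, -1], 1.5)   # fires unless both fire
    return neuron([h_or, h_nand], [1, 1], -1.5)  # AND of the two hidden units

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```

Each unit is trivially simple; the interesting behaviour lives entirely in the relations between them.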

But notice the major point: we don't program them to be intelligent, we give them the capacity to become intelligent. And that's where the artificial part ends. Our job is to provide the best package possible; the rest is beyond us.
Hmm... interesting point... I sort of agree - the thoughts themselves cannot be determined by programming, as then it wouldn't be intelligence.

Nice logic. Dinosaurs were a bit larger than humans, which means a helluva lot more cells, don't you think? I imagine brontos playing chess now. You seem to think that 'if the complexity is arranged in a way that allows a mind' is just a tiny-tiny step left to be made. It's the major problem. Much bigger than making a 100 THz Pentium 78 and putting a few thousand of them together, or a manned mission to Jupiter.
There was a massively parallel computer made by Thinking Machines; it had 65K small CPUs, like a neural net. Awesome box. It died. Guess why? The human mind is unable to program such a massively parallel computer. See, it's not a technology issue.
Yes and no. It has been suggested that intelligence is based on the ratio of brain size to body mass. While the bronto had a big brain, most of its cells were busy just keeping the body alive, and hence only a very small part was freed up in the frontal lobe to do things not essential to survival... like certain board games. While the bronto brain had complexity, that complexity was specialised for other things.
No, that is not a tiny step. Sorry if I gave that impression. The problem is the core principles of our current computing. Our current computing is based strictly on formal logic - a deterministic processor, held at each point by programming. Extend that, and you just get the same thing; the problem is that the programs themselves can't keep up. WE can't keep up.
When is parallel computing successful, then? When we exploit the fact that there are such sub-divisions. Notice the tremendous success of the SETI@home project. For the success of AI, we need

(a) processors with a degree of randomness - perhaps quantum computing, or fuzzy logic can be the key,
(b) processors that function individually, but can communicate, and
(c) a whole new dynamic to programming - no longer instructions, but situations to react to.

That's what I meant by "aim to duplicate the mind".
 
3,754
2
Originally posted by wimms
You still haven't decided: do you mean artificial in an absolute sense, or only a relative one? You mix them all the time.
Yes, well it is one word.

You just switched the relative meaning to the absolute one, and you get that the bone was produced naturally. Who cares? Alien/foreign does not describe that it mimics and replaces your genuine bone. That's what the word artificial is used for.
Then any child's intelligence must be artificial, right?

Take Nature as a closed system, and take humans as standing outside of it
That's what everyone does, and it's what I'm arguing against.

So, what are the teachings of parents to a baby? Artificial experience. You don't like that usage of the word? Don't use it.
No, if I don't like that usage of the word, then it may (just may) be incorrect usage.

You object in vain. It's not the variations; no one cares about them. It's conditions, methodologies, algorithms. Formulas, not values. Chess engines have been 100% defined. It's too simple a game.
I'm sorry, but you've misunderstood chess enormously! Chess is a system that is part mathematics, part variations, part dynamics, and part "chess logic".

For a computer, all is defined when you describe the rules of the game and it has an unambiguous, non-contradictory perception of them. If the other player makes a false move, and it wasn't defined as false, the computer would get lost. If the rules of the game were contradictory, it would be unable to resolve many situations.
I wouldn't even be able to play a game with contradictory rules!

A chess engine has only one thing: the capacity to play chess. It has no capacity to think.
If you think that you can play Chess without thinking, then you're obviously not a chess player (to say the least).

To create an algorithm, the intelligence of the creator must be higher than the intelligence of the algorithm. Notice the contradiction. How intelligence develops in the human brain is a mystery. It does the impossible - a monkey mind writes physics textbooks, a monkey creates Einsteins.
"Intelligence must be higher than intelligence of an algorithm"? What intelligence does an algorithm have?

OK, imagine a small heap of sand. Simple. Now imagine a huge mountain. Magnitudes more sand, much more complex. Is the mountain 'smarter'?
No, just like a whale isn't smarter than me (that's debatable, but... :wink:).

OK, let's build a laser-guided, electronically controlled, programmable, infrared-sensitive ... mousetrap. It's more complex, but is it smarter? Can it 'think'?
A molecule is more complex than an atom, but it can't think. The more complex the brain (Central Processing Unit), the more intelligent the computer.
 
476
0
Originally posted by Mentat
Then any child's intelligence must be artificial, right?
Not the child's intelligence, the child's knowledge. You cannot impart intelligence to a child. It is something that can only be inherent to a brain.

I'm sorry, but you've misunderstood chess enormously! Chess is a system that is part mathematics, part variations, part dynamics, and part "chess logic".
If you think that you can play Chess without thinking, then you're obviously not a chess player (to say the least).
Yeah, I was afraid you'd get hung up on the word 'simple'. You missed my point, though. Simple are the chess rules. They are defined. How you play within those rules is completely undefined, and a different story. There are only so many rough recommendations; the rest is pure exercise of intellect. Not so for computers: they play the game with brute-force computation. They use a set of preprogrammed algorithms. And those algorithms are again fully defined - the algorithms themselves, not what they produce. Without that, the computer runs the risk of hitting an undefined path of computation and crashing. What they do is find the best strategy among zillions of possible ones and weigh them. Computers can do magnitudes more of such comparisons than a human, so it's only a matter of time before a computer plays chess better than humans, technically.
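The brute-force weighing described here is essentially minimax search. A minimal editor's sketch over a hand-built toy game tree (not a real chess engine; leaves are assumed position scores from the engine's point of view):

```python
def minimax(node, maximizing):
    """Walk the game tree, assuming both sides play their best move.

    The engine does not 'understand' positions; it just enumerates
    every line of play and weighs the resulting scores."""
    if isinstance(node, int):  # leaf: a scored position
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Two moves for us, each answered by two replies from the opponent.
tree = [[3, 5], [2, 9]]
print(minimax(tree, True))  # -> 3: the move guaranteeing at least 3
```

Note that the second move looks tempting (it contains the 9), but the search correctly assumes the opponent will answer with the 2, so the first move wins out.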

Chess has very many facets, including strategies to exhaust the other intellect, emotional warfare, recognition of the opponent's experience in famous games and his preferences in them, bluffing, style. All those things go far outside the scope of mere computation. To use them, one has to engage much more, and much wider, intellect than is needed just on the board. Computers can't and don't do that. Often that's when they lose pathetically.
I understand chess well enough. A decade ago I was engaged in coding a few chess algorithms. Pathetic by our standards, but I learned that it's very easy to deceive a human into thinking he has an intelligent opponent.
Heck, you can buy a chess 'calculator' for a few bucks and it'll play damn good chess against any average mortal.

Finally, don't forget that chess engines can do only one thing - play chess. They are specialised; they can't drive a car, etc. How much effort has been put into creating Deep Junior? Enormous. Yet it's just chess.

I wouldn't even be able to play a game with contradictory rules!
Yes, but you could argue with the opponent, up to some exchange of fire or cash, and sort it out so that you can restart the game or one of you is 'dead'. In any case, you won't 'crash'.

"Intelligence must be higher than intelligence of an algorithm"? What intelligence does an algorithm have?
What intelligence do the genetic evolutionary algorithms in Deep Junior have? Go figure. But when you put many algorithms to compete with each other, some will come out winners. 'Intelligence' in quotes, of course - don't go into semantics again.

A molecule is more complex than an atom, but it can't think. The more complex the brain (Central Processing Unit), the more intelligent the computer.
I'm confused all the way. How exactly do you understand 'complex'? Has it never occurred to you that the complexity of a CPU has nothing to do with its ability to think? It is a prerequisite, yes, but not the answer. In a computer, it's the software that defines its behaviour. Hardware is just the tool to run it within a reasonable timeframe. Haven't you ever met a person who is dumber than his dog? Have you heard that even the best of humans engage just a fraction of the capacity of our brains? Think about it: such a big brain, and possibly such a dumb person.
 
476
0
Originally posted by FZ+
And why can't computers do the same?
Because that needs a capacity for abstraction and rationalization from incomplete, uncertain and ambiguous experience, often an nth degree of speculation. Not even in the close future.
To improve an algorithm, one needs to get out of the loop and above the problem, to see abstractly the problem and the applicability of existing solutions. From 'inside', there is only one option - action-reaction type self-adaptation. That limits the possibilities enormously.

Basically, it's coupling a degree of randomness to a selector that lets the best survive. These then develop themselves to produce their own programming - often even better than a human could think up. That's programming - creating their own algorithms - without a programmer.
Right. But do you think that's a cheap bingo? What if none survives? What if such runaway self-programming hits a dead end, a deadlock? What if the criterion for selecting the best is stupid? What if 'often' actually means 1 in a million?

Here lies a complexity of another kind. We need to invent algorithms that evolve, and don't reach their limits at the level of a 3-year-old, but go further. That's difficult. It's like trying to give a million balls specific initial accelerations so that, a few weeks after we let them go, they form any desired geometric shape after colliding with each other zillions of times. The difficulty is not in making evolution possible; the difficulty is in making sure it doesn't stop after a few 'generations' and dissolve into random noise.
Anyway, it's a fascinating area of research. You really start looking at computers as at living things.

But I think we have a theoretical beginning on how such a machine could work, modeled on our current understanding of the brain. Currently it is thought that thought is the product of a sort of mini-evolution in the brain, in terms of competing impulses and such. It's a mega-network of small, semi-intelligent bits that work together in harmony.
I'm not up to date in brain research, dunno. I only think that it's quite a bit more difficult than that. Holographic-like memory. Operating not on the bit level, but with abstract images of concepts, and thus to a degree insensitive to individual 'bits'. Technical/technological details are IMO secondary, although the issues are not - we don't want an intelligent computer the size of the Moon. By details I mean it's to a degree irrelevant how we deliver the signals; it's how they interact algorithmically that counts.

When the brain needs to solve a problem it hasn't faced before, it somehow engages parts of the brain unrelated to the problem, creating a sort of many-to-one focus. By that, it extends the problem at hand over the whole baggage of experience it has, and thereby detaches from inside the problem to 'above' it. And even though the experience it uses isn't applicable, it helps to make the forced induction, to turn the scale. After such an induction, the solution becomes part of experience, used in the future to make inductions in other areas. This way, the bigger whole is used to make progress in smaller areas. It's something like physicists engaging experience from mathematics, or even from the behaviour of cats as a species. Sometimes unexpected analogies can help. So yes, there is evolution.

What computers lack is that critical threshold of abstract experience; they can't detach from the problem at hand, they remain 'inside'. They can't solve problems without external help.

If you think about the brain, a lot of the experience it has comes from the external world - from books, talks, education. All that knowledge becomes part of the 'baggage' used for thinking. None of the knowledge is rock solid; a lot of the 'baggage' can actually be crap. Can you sense the reason for 'beliefs' and religion here? It's normal, and any brain consists of beliefs; it's actually the only possible way - without that, it can't be internally consistent. If that breaks, the mind goes nuts. Human programming is basically imparting the 'baggage' so that one's belief system changes in the wanted direction. Very dangerous to consistency if converting a mind to its opposites. But that's basically what our education is. It speeds up our 'baggage' creation and makes sure it's in the same direction as the rest of mankind.

I wonder if we'll ever see a computer that 'believes' it is Napoleon...

Our current computing is based strictly on formal logic - a deterministic processor, held at each point by programming. Extend that, and you just get the same thing; the problem is that the programs themselves can't keep up. WE can't keep up.

When is parallel computing successful, then? When we exploit the fact that there are such sub-divisions. Notice the tremendous success of the SETI@home project. For the success of AI, we need

(a) processors with a degree of randomness - perhaps quantum computing, or fuzzy logic can be the key,
(b) processors that function individually, but can communicate, and
(c) a whole new dynamic to programming - no longer instructions, but situations to react to.
SETI is not a good example. It is a brute-force approach. It scales well, but has very little capacity for intelligence. On the rest I agree with you, but with a few reservations:
I don't think any randomness is needed. The uncertainty of the external world is enough. Fuzzy logic is IMO indeed a key, and quantum computing too - not because of uncertainty, but because we need an enormous amount of computing power in crazily small volumes. Superposition of quantum states gives specific benefits that can't be mimicked well with digital computers. Individual processors are also not required; it's more the processes that count.
And yes, that programming. It's the major beast to conquer - a complete change in our programming thinking, even though underneath it may be old-style instructions. It's just too much programming, and too complex, for an ordinary mortal to grasp.

Anyway, our PCs are nowhere near all of the above; they are dumb. Chess engines are extremely limited in scope, always 'inside', and thus only adaptive, not creative. I believe that quantum computing will be the kicker for computer intelligence, mainly because it's impossible to apply classical programming there, so a new kind of programming will be forced to make sharp advances.
 
3,754
2
Originally posted by wimms
Not child's intelligence, child's knowledge. You cannot impart intelligence to a child. It is something that can only be inherent to a brain.
That's completely incorrect, as 1) intelligence and knowledge are not so easily separable; and 2) it is a "Nature" over "Nurture" viewpoint, when (IMO) the reality lies in a mix of both.

Yeah, I was afraid you'd get hung up on the word 'simple'. You missed my point, though. Simple are the chess rules. They are defined. How you play within those rules is completely undefined, and a different story. There are only so many rough recommendations; the rest is pure exercise of intellect. Not so for computers: they play the game with brute-force computation. They use a set of preprogrammed algorithms. And those algorithms are again fully defined - the algorithms themselves, not what they produce.
I happen to know that there are chess engines that learn from each game and form patterns (like a human does). I also happen to know that a certain amount of understanding of the dynamics of chess (which is what usually separates human chess players from chess engines) can be imparted to a man-made computer.

Chess has very many facets, including strategies to exhaust the other intellect, emotional warfare, recognition of the opponent's experience in famous games and his preferences in them, bluffing, style. All those things go far outside the scope of mere computation. To use them, one has to engage much more, and much wider, intellect than is needed just on the board. Computers can't and don't do that. Often that's when they lose pathetically.
Actually, they can be made to do just that. In fact, ChessMaster 7000 makes it so you can play against Josh Waitzkin's style if you so choose, and this comes from the computer's having memorized Josh's games (like a human would do, only better) and developed a model of his style.

Anyway, the chess-engine discussion is really pointless, as I was just trying to make the point that a man-made computer's intelligence is, or can be, a mirror of our own.

Yes, but you could argue with the opponent, up to some exchange of fire or cash, and sort it out so that you can restart the game or one of you is 'dead'. In any case, you won't 'crash'.
People who are obsessed with chess might. And that's really the point, as any computer (artificial or otherwise) can simply ignore chess if it so chooses, unless it is predisposed to care about nothing else.

Also, please remember that I am not trying to put man-made computers at an equal level of complexity to a human. I'm merely saying that their intelligence is not "less genuine", even if they have less of it (after all, an adult has greater intelligence than an infant, but that doesn't mean that the infant's intelligence is less genuine).

I'm confused all the way. How exactly do you understand 'complex'? Has it never occurred to you that the complexity of a CPU has nothing to do with its ability to think? It is a prerequisite, yes, but not the answer.
How can its complexity have nothing to do with it, and yet be a prerequisite?

In a computer, it's the software that defines its behaviour. Hardware is just the tool to run it within a reasonable timeframe.
Much like a human.

Haven't you ever met a person who is dumber than his dog? Have you heard that even the best of humans engage just a fraction of the capacity of our brains? Think about it: such a big brain, and possibly such a dumb person.
It's not about how big the brain is (a whale's brain is bigger than mine). It's about how complex the brain is, and how fast it computes.
 
62
0
Originally posted by wimms
What computers lack is that critical threshold of abstract experience; they can't detach from the problem at hand, they remain 'inside'. They can't solve problems without external help.
I'm puzzled by your use of the word 'inside', or the idea of 'detaching from the problem at hand'. This, as Mentat described pretty well, would be only a matter of complexity. Computers don't look at the broader picture because they were never programmed to do that. 'Detaching' would be just another aspect of the computer's AI software.
By the way, are you aware that there are programs that have effectively found mathematical proofs better than their human counterparts? With that in mind, is 'detaching from the problem at hand' really all that necessary?
 

Dj Sneaky Whiskers

Originally posted by C0mmie
By the way, you are aware that there are programs that have effectively found matematical proofs better than their human counterparts? With that in mind, is 'detatching from the problem at hand' really all that necessary?
The debate on the significance of using computers to find rigorous proofs within mathematics is still raging; as such, it's hard to see how they could be categorised as 'better', since a few mathematicians are unsure as to whether they can validly be taken to constitute 'proofs'. Unless of course you're talking about 'applied' proofs, where you're just juggling already-established identities (i.e. using cos^2 x + sin^2 x = 1 to prove cos 2x = 1 - 2sin^2 x), rather than rigorous proofs in analysis. In the latter case, the subject of what can be called a proof is one which requires a mainly philosophical approach, and, as such, the possible conclusions are going to be argued over for a long time to come yet.
 
476
0
Originally posted by C0mmie
I'm puzzled by your use of the word 'inside', or the idea of 'detaching from the problem at hand'. This, as Mentat described pretty well, would be only a matter of complexity. Computers don't look at the broader picture because they were never programmed to do that. 'Detaching' would be just another aspect of the computer's AI software.
We mix up all the time what a computer can do today and what a computer could do in the future. Also, I have no idea what you people mean by the word 'complex'. A forest is complex, water is complex, the Sun is damn complex; heck, a single atom is complex, the brain of a child is complex. That does not guarantee intelligence. A child without guidance and sensations will never become intelligent. Just being complex is not enough.

'Detaching' is yet another aspect of programming? Don't you find that to 'detach', the AI must have those other 'aspects' to consult? Okay, suppose so. But have you heard of combinatorial explosion? Do you have any idea how many 'yet another aspects' one would need to program computer intelligence, and how much effort each one of them would require? Sure, one might think it's possible. But consider this: it has been statistically observed, almost as a law, that for every 1000 lines of code, humans will make at least 1 bug. For every N bug fixes, they'll make another bug. When calculated, this leads to the observation that the limit for coding a single piece of software is around 2-5M lines (I don't remember the exact figures). After that, the code will have thousands of bugs and each bug fix will create more bugs than it fixes. This is no technological problem, but a pure human factor: humans are unable to keep their concentration over so vast a task. Windows 2000 alone has more than 1M lines of code. Sure, that's not a hard limit, but it hints at a very real one, while coding AI would require magnitudes more lines of code. So there's good reason to assume that humans will never be able to program computer intelligence; it's beyond our mental ability today.
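The arithmetic behind this claim can be sketched as a toy model (an editor's illustration; the rates - one bug per 1000 lines, fix-failure odds growing to 100% at 5M lines - are assumptions chosen to mirror the argument, not measured data). If each fix has some chance of introducing a new bug, the total number of fixes needed is a geometric series, and it diverges once that chance reaches 1.

```python
def fix_rate(lines):
    """Assumed chance that a bug fix introduces a new bug.

    Illustrative assumption: it grows with codebase size, hitting
    100% at the 5M-line mark the post mentions."""
    return min(lines / 5_000_000, 1.0)

def total_fixes_needed(lines):
    """Expected fixes before the code is clean.

    Each round of fixing re-introduces a fraction r of the bugs it
    removes, so the total is initial / (1 - r) - finite only while
    each fix breeds less than one new bug."""
    initial_bugs = lines / 1000  # one bug per 1000 lines written
    r = fix_rate(lines)
    return float('inf') if r >= 1 else initial_bugs / (1 - r)

for lines in (100_000, 1_000_000, 5_000_000):
    print(f"{lines:>9} lines -> fixes needed: {total_fixes_needed(lines)}")
```

Under these assumptions a 100K-line project needs barely more fixes than it has bugs, a 1M-line project needs noticeably more, and at 5M lines the series diverges: every fix breeds a new bug, which is exactly the "limit" being described.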

Originally posted by Mentat
How can its complexity have nothing to do with it, and yet be a prerequisite?
How can 'existence' have nothing to do with thinking, and yet be a prerequisite? How can you program if there is nothing to program, or if you can't fit your program?

You seem to believe that if we stuff together a few tons of neurons, that mass would become intelligent?
Complexity is a bunch of related simplicities. Any complex thing, if you look closely enough, will look quite simple. It's not the bunch, and not the simplicities, that matter, but the relations. Stupid relations won't yield intelligence, but noise. Random relations would yield random noise. It's the relations that are difficult to invent. Creating smart relations is not much different from programming software, and subject to similar human limitations.

I don't understand how you can take complexity for granted, 'only a matter of complexity'. The complexity of relations you might mean is the product; the bunch of simplicities is the prerequisite. To say that it's 'only a matter of complexity' is to say nothing. It's like saying that intelligence is just a matter of intelligence.

To create a thinking computer, we have to go from a bunch of simplicities all the way to intelligence. We are likely to fail. We could create a playground for chaos that would create its relations by itself via repeated self-interactions, and might start an evolution of its own. This also has odds of 1 in 10^billions to succeed. It takes enormous effort to create human-level intelligence, not 'just complexity'.
Somehow we'll succeed, hopefully. But we are nowhere near that. Today we have only fake imitations that deceive us, without even remotely having the ability to think.
 

LogicalAtheist

For Mentat to say "humans are computers" is absurd.

You also failed to define any other type of intelligence other than artificial. If there is a need, which there is, to identify artificialness, then what's the other option?

I think this is a classic case of someone being emotional over words.

Humans defined artificial to mean a certain thing. Now you're going and saying it doesn't mean that. Well, yes it does, because that's the meaning WE GAVE IT. In other words, you're saying the brown cow isn't brown, or better yet, that a cow isn't a cow.

Regardless of whether you're speaking of intelligence, computers, or humans, your argument can quickly be ruined, as I did above, using simple logic.

Even if it weren't so easily ruined, you failed to define types of intelligence. Even so, you would have given them definitions that were emotionally satisfying to you, and not definitions that fit what humans consider to be their meanings.

Don't take offense at this. It's just that I read your post and formulated all this so quickly, it's my fingers that took the time here, not my brain.

If you DO take offense, turn that energy into removing the emotions you have in this case, and instead use logic to criticize your own argument.

Good luck
 

FZ+

1,550
2
You also failed to define any other type of intelligence other than artificial. If there is a need, which there is, to identify artificialness, then what's the other option?
Non-human intelligence. Intelligence that has nothing to do with mankind.

Humans defined artificial to mean a certain thing. Now you're going and saying it doesn't mean that. Well, yes it does, because that's the meaning WE GAVE IT. In other words, you're saying the brown cow isn't brown, or better yet, that a cow isn't a cow.
No, sir. Rather, the terms we gave it are open to interpretation. One person's idea of brown may well be different from another's. The commonly visualised idea of artificial is indeed contradictory with the strict dictionary definition.
 
3,754
2
Originally posted by wimms
Also, I have no idea what you people mean by the word 'complex'. A forest is complex, water is complex, the Sun is damn complex; heck, a single atom is complex, the brain of a child is complex. That does not guarantee intelligence. A child without guidance and sensations will never become intelligent. Just being complex is not enough.
I'm talking about a complex system, whose purpose is to think. The more complex this thinking machine is, the better it will be at it.

But have you heard of combinatorial explosion? Do you have any idea how many 'yet another aspects' one would need to program computer intelligence, and how much effort each one of them would require?
Combinatorial explosion can occur in the human brain as well.

How can 'existence' have nothing to do with thinking, and yet be a prerequisite? How can you program if there is nothing to program, or if you can't fit your program?
I didn't say that "existence" had nothing to do with thinking. I asked why you said that complexity had nothing to do with thinking, and then said that it was a prerequisite.

You seem to believe that if we stuff together a few tons of neurons, that mass would become intelligent?
I'm going to try to be even more clear than I have been (if that's possible): Complex does not mean massive.

Complexity is a bunch of related simplicities. Any complex thing, if you look closely enough, will look quite simple. It's not the bunch, and not the simplicities, that matter, but the relations.
Exactly, so if the relations are complex enough, the computer can be as intelligent as a human, right?

I don't understand how you can take complexity for granted, 'only a matter of complexity'. The complexity of relations you might mean is the product; the bunch of simplicities is the prerequisite. To say that it's 'only a matter of complexity' is to say nothing. It's like saying that intelligence is just a matter of intelligence.
Hey, you were the one who said that complexity alone couldn't produce intelligence equal to a human's.

To create a thinking computer, we have to go from a bunch of simplicities all the way to intelligence. We are likely to fail. We could create a playground for chaos that would create its relations by itself via repeated self-interactions, and might start evolution on its own. This also has odds of 1 against 10^billions to succeed. It takes enormous effort to create human-level intelligence, not 'just complexity'.
The enormous human effort would be toward achieving complexity.

Somehow, we'll succeed, hopefully. But we are nowhere near that. Today we have only fake imitations that deceive us, without even remotely having the ability to think.
"Think" doesn't mean "be creative", like you are making it seem. "Think" just means to "compute".
 
Originally posted by LogicalAtheist
For mentat to say "humans are computers" is absurd.
Ok. Perhaps you should explain why, as I don't see its absurdity. Are our brains not computers?

You also failed to define any other type of intelligence other than artificial. If there is a need, which there is, to identify artificialness, then what's the other option?
Well, in the sense of "artificial" that I was talking about at the beginning of this thread, the other option is "genuine".

I think this is a classic case of someone being emotional over words.
Yes, I am the PF king of semantics, and don't you forget it.

Humans defined artificial to mean a certain thing. Now you're going and saying it doesn't mean that. Well, yes it does because that's the meaning WE GAVE IT. In other words, you're saying the brown cow isn't brown, or better yet a cow isn't a cow.
What did humans define "artificial" to mean, pray tell?

Regardless of the fact that you're speaking of intelligence, computers, or humans, your argument can quickly be ruined as I did above, using simple logic.
You did nothing of the kind.

Don't take offense to this.
Of course not.

It's just that I read your post and so quickly formulated all this; it's my fingers that took the time here, not my brain.
Let's try thinking before typing, to avoid "swiss cheese logic", shall we (no offense)?

Good luck
Thanks. :smile:
 

LogicalAtheist

Although Mentat has made false statements, surely unknowingly, as I don't doubt his honesty, and although he has broken rule 3, which I am about to add to my signature because it's very common now too, I have a tidbit to help his case, at least I hope so.

Brains of species are defined as complex or advanced based on the amount of neural connections (nerves, I should say), thus the amount of possibilities, and other similar criteria. Of course, taking notice of which parts of the brain a species has is also important.

Recently I saw some tech shows involving advanced intelligence. They compared these machines to human brains by the amount of "electrical" connections in them vs. the brain.

IMPORTANT INFORMATION.

The brain "does what it does" using two things

1. chemical interactions
2. electrical interactions.

The brain is entirely nerve cells. WITHIN a nerve cell is the electrical reaction. BETWEEN nerve cells is where the chemical interactions take place.

Now, since a given piece of technology would NOT need to have this in-between space, it could surely imitate a brain using only electricity.

Surprised? Yes, indeed. Our brain processes are completely ELECTROCHEMICAL. That is the summation of processes by which we "think" and do other things, like access memories or gain new ones.

So, I ask you, is a machine that uses only electrochemical interactions to process information artificial?

If so, then are we artificial? Despite what many wish to believe, that is all we consist of, brain-wise.

Perhaps the difference lies in the "nature" of what we can do with our brains?

A machine can access information, accept new information, change information and output information.

When can it USE information to create ITS OWN NEW INFORMATION?

Ever heard of DEEP BLUE?

Deep Blue is currently considered the most intelligent machine. It is a machine designed to do one thing: play chess better than any human being, and never lose.

Yes, it does lose. It is also designed to think: rather than having an entire program of all possible moves and simply accessing that, it does this "thinking" thing.

It uses electricity, no chemicals.

Is it artificial, and if so, why? If you need information on it, just use Google and type "deep blue", and perhaps add "chess". I'll post this in its own place, because I suppose it will add a lot to Mentat's questions and propositions...
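The "thinking" described here, searching moves ahead rather than looking answers up in a table, is at its core minimax game-tree search. A minimal sketch of the idea over a toy tree with made-up leaf scores (this is illustrative only, not Deep Blue's actual code, which also used heavy pruning, hand-tuned evaluation, and custom hardware):

```python
def minimax(node, maximizing):
    """Evaluate a game tree where a leaf is a numeric score and an
    internal node is a list of child subtrees (alternating turns)."""
    if isinstance(node, (int, float)):  # leaf: static evaluation
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Toy tree: two moves for the maximizer, each answered by the minimizer.
tree = [[3, 5], [2, 9]]
print(minimax(tree, True))  # → 3: pick the branch whose worst case is best
```

Deep Blue combined this kind of search with alpha-beta pruning and specialized chess chips, examining on the order of 200 million positions per second.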
 
I am out. This is becoming noise exchange. See you in other threads.
 

LogicalAtheist

Noise exchange? To me this is a good subject, one that, with common definitions, is certainly open to opinion. Sorry you don't feel that way!

Here is a link to the revived part citing his original post and my post which I feel will open up some new thoughts for people interested:


https://www.physicsforums.com/showthread.php?s=&postid=26086#post26086

Enjoy!
 
Originally posted by LogicalAtheist
Although Mentat has made false statements, surely unknowingly, as I don't doubt his honesty, and although he has broken rule 3, which I am about to add to my signature because it's very common now too, I have a tidbit to help his case, at least I hope so.
How about we remember that Mentat reads these as well, and speak at him, if we have something to say to him. There are few things more patronizing than speaking of someone in the third person, while they are present (and thus, being a teen and a middle child, there are few things I hate worse).

Recently I saw some tech shows involving advanced intelligence. They compared these machines to human brains by the amount of "electrical" connections in them vs. the brain.

IMPORTANT INFORMATION.

The brain "does what it does" using two things

1. chemical interactions
2. electrical interactions.

The brain is entirely nerve cells. WITHIN a nerve cell is the electrical reaction. BETWEEN nerve cells is where the chemical interactions take place.

Now, since a given piece of technology would NOT need to have this in-between space, it could surely imitate a brain using only electricity.

Surprised?
Not at all, any biology textbook would tell you the same.

So, I ask you, is a machine that uses only electrochemical interactions to process information artificial?
It can be, if you define "artificial" as man-made. It's not any less genuine than any other process of thinking (of course, I can't think of any other processes of thinking, but the fact remains).

If so, then are we artificial? Despite what many wish to believe, that is all we consist of, brain-wise.
Please post which definition of "artificial" you are discussing, so that I can respond better.

Perhaps the difference lies in the "nature" of what we can do with our brains?
What "difference"? The "difference" between what and what?

A machine can access information, accept new information, change information and output information.

When can it USE information to create ITS OWN NEW INFORMATION?
I've seen humans do this all of the time. Humans are machines (unless you deny that our bodies are collections of parts that serve purposes), and they can do that. Besides, even if you exclude humans, you are talking about creativity, not intelligence.

Ever heard of DEEP BLUE?

Deep Blue is currently considered the most intelligent machine. It is a machine designed to do one thing: play chess better than any human being, and never lose.

Yes, it does lose. It is also designed to think: rather than having an entire program of all possible moves and simply accessing that, it does this "thinking" thing.

It uses electricity, no chemicals.

Is it artificial, and if so, why? If you need information on it, just use Google and type "deep blue", and perhaps add "chess". I'll post this in its own place, because I suppose it will add a lot to Mentat's questions and propositions...
Deep Blue is man-made, and so it is that kind of "artificial". I don't think its intelligence is any less genuine than a human's, if that's what you mean by "artificial".
 
