Is Artificial Intelligence Truly Artificial?

The discussion centers on the definition and implications of "artificial intelligence" versus "genuine intelligence." Participants debate whether intelligence exhibited by man-made computers should be considered "artificial" in a derogatory sense, as many believe it lacks genuineness. Key points include the definitions of "intelligence" and "artificial," with some arguing that man-made intelligence can possess the same qualities as organic intelligence. The conversation touches on the nature of free will, consciousness, and the complexity of human cognition compared to computer functions. Several participants argue that labeling computer intelligence "artificial" reflects human arrogance, since humans themselves are products of natural processes. The dialogue also explores philosophical questions about the nature of creation and consciousness, and whether future advancements could lead to computers indistinguishable from human intelligence. Overall, the thread is a deep philosophical inquiry into the essence of intelligence and the implications of human creation.

Is the intelligence of a man-made computer artificial?

  • No, I agree with Mentat

    Votes: 4 26.7%
  • No, but for different reasons than Mentat's

    Votes: 6 40.0%
  • Yes, because...

    Votes: 5 33.3%

  • Total voters
    15
  • #61
Originally posted by wimms
You still haven't decided whether you mean artificial in an absolute sense or only a relative one. You mix them all the time.

Yes, well it is one word.

You just switched the relative meaning to the absolute one, and you get that the bone was produced naturally. Who cares? Alien/foreign does not describe that it mimics and replaces your genuine bone. That's what the word artificial is used for.

Then any child's intelligence must be artificial, right?

Take Nature as closed system, take human as standing out of it

That's what everyone does, and it's what I'm arguing against.

So, what are the teachings of parents to a baby? Artificial experience. You don't like that usage of the word? Don't use it.

No, if I don't like that usage of the word, then it may (just may) be incorrect usage.

You object in vain. It's not about variations; no one cares about them. It's about conditions, methodologies, algorithms. Formulas, not values. Chess engines have been 100% defined. It's too simple a game.

I'm sorry, but you've misunderstood chess enormously! Chess is a system that is part mathematics, part variations, part dynamics, and part "chess logic".

For a computer, everything is defined once you describe the rules of the game and it has an unambiguous, non-contradictory perception of them. If the other player made a false move, and it wasn't defined as false, the computer would get lost. If the rules of the game were contradictory, it would be unable to resolve many situations.

I wouldn't even be able to play a game with contradictory rules!

A chess engine has only one thing: the capacity to play chess. It has no capacity to think.

If you think that you can play Chess without thinking, then you're obviously not a chess player (to say the least).

To create an algorithm, the intelligence of the creator must be higher than the intelligence of the algorithm. Notice the contradiction. How on Earth intelligence develops in the human brain is a mystery. It does the impossible: a monkey mind writes physics textbooks, a monkey creates Einsteins.

"Intelligence must be higher than intelligence of an algorithm"? What intelligence does an algorithm have?

OK, imagine a small heap of sand. Simple. Imagine a huge mountain. Magnitudes more sand, much more complex. Is the mountain 'smarter'?

No, just like a whale isn't smarter than me (that's debatable, but... :wink:).

OK, let's build a laser-guided, electronically controlled, programmable, infrared-sensitive... mousetrap. It's more complex; is it smarter? Can it 'think'?

A molecule is more complex than an atom but it can't think. The more complex the brain (Central Processing Unit), the more intelligent the computer.
 
  • #62
Originally posted by Mentat
Then any child's intelligence must be artificial, right?
Not child's intelligence, child's knowledge. You cannot impart intelligence to a child. It is something that can only be inherent to a brain.

I'm sorry, but you've misunderstood chess enormously! Chess is a system that is part mathematics, part variations, part dynamics, and part "chess logic".
If you think that you can play Chess without thinking, then you're obviously not a chess player (to say the least).
Yeah, I was afraid you'd get hung up on the word 'simple'. You missed my point, though. The chess rules are what's simple. They are defined. How you play within those rules is completely undefined, and a different story. There are only so many rough recommendations; the rest is pure exercise of intellect. Not so for computers: they play the game with brute-force computation. They use a set of preprogrammed algorithms. And those algorithms are again fully defined, though what they do is not. Without that, the computer runs the risk of hitting an undefined path of computation and crashing. What they do is find the best strategy out of zillions of possible ones and weigh them. Computers can do magnitudes more of such comparisons than a human, so it's only a matter of time before a computer plays chess better than humans, technically.
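The brute-force search described above can be sketched in a few lines. This is an illustrative toy, not any particular engine's code: a depth-limited minimax over a fully defined game tree. Real chess engines add alpha-beta pruning and hand-tuned evaluation functions, but the principle is the same: enumerate strategies within the defined rules and weigh them.

```python
def minimax(state, depth, maximizing, moves, evaluate):
    """Best achievable score from `state`, looking `depth` plies ahead."""
    children = moves(state)
    if depth == 0 or not children:
        # at the search horizon (or a terminal position), fall back to
        # the static evaluation function
        return evaluate(state)
    if maximizing:
        return max(minimax(c, depth - 1, False, moves, evaluate) for c in children)
    return min(minimax(c, depth - 1, True, moves, evaluate) for c in children)

# Toy game (purely hypothetical): the state is a number, each player may
# add 1 or 2, and the evaluation is the number itself.
moves = lambda n: [n + 1, n + 2] if n < 10 else []
score = minimax(0, 4, True, moves, lambda n: n)
```

The point the post makes holds here too: the search never steps "outside" the rules handed to `moves`; an input the rules don't define simply has no branch in the tree.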

Chess has very many facets, including strategies to exhaust the other intellect, emotional warfare, recognition of the opponent's experience in famous games and his preferences in them, deception, style. All those things go far outside the scope of mere computation. To use them, one has to engage a much wider intellect than is needed just on the board. Computers can't and don't do that. Often that's when they lose pathetically.
I understand chess well enough. A decade ago I was engaged in coding a few chess algorithms. Pathetic by our standards, but I learned that it's very easy to deceive a human into thinking he has an intelligent opponent.
Heck, you can buy a chess 'calculator' for a few bucks and it'll play damn good chess for any average mortal.

Finally, don't forget that chess engines can do only one thing: play chess. They are specialised; they can't drive a car, etc. How much effort has been put into creating Deep Junior? Enormous. Yet it's just chess.

I wouldn't even be able to play a game with contradictory rules!
Yes, but you could argue with the opponent, perhaps up to some exchange of fire or cash, and sort it out so that you can restart the game or one of you is 'dead'. In any case, you won't 'crash'.

"Intelligence must be higher than intelligence of an algorithm"? What intelligence does an algorithm have?
What intelligence do the genetic evolutionary algorithms in Deep Junior have? Go figure. But when you put many algorithms to compete with each other, some will come out winners. 'Intelligence' in quotes, of course; don't go into semantics again.

A molecule is more complex than an atom but it can't think. The more complex the brain (Central Processing Unit), the more intelligent the computer.
I'm confused all the way. How do you specifically understand 'complex'? Has it never occurred to you that the complexity of a CPU has nothing to do with its ability to think? It is a prerequisite, yes, but not the answer. In a computer, it's the software that defines its behaviour. Hardware is just the tool to do that within a reasonable timeframe. Haven't you ever met a person who is dumber than his dog? Have you heard that even the best of humans engage just a fraction of the capacity of our brains? Think about it: such a big brain, and possibly such a dumb person.
 
  • #63
Originally posted by FZ+
And why can't computers do the same?
Because that needs the capacity for abstraction and rationalization from incomplete, uncertain, and ambiguous experience, often an nth degree of speculation. Not even in the close future.
To improve an algorithm, one needs to get out of the loop and above the problem, to see the problem abstractly along with the applicability of existing solutions. From 'inside', there is only one option: action-reaction-type self-adaptation. That limits the possibilities enormously.

Basically, it's to couple a degree of randomness to a selector that lets the best survive. These then develop themselves to produce their own programming - often even better than a human could think up. That's programming, creating their own algorithms, without a programmer.
Right. But do you think that's a cheap bingo? What if none survives? What if such runaway self-programming hits a dead end, a deadlock? What if the selection criteria are stupid? What if 'often' actually means 1 in a million?
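The "randomness plus a selector" idea quoted above can be sketched as a toy genetic algorithm. Everything here is made up for illustration - the bit-string genomes, the target, the population size, and the 5% mutation rate are arbitrary choices, not anything from the discussion:

```python
import random

random.seed(0)          # fixed seed so the sketch is repeatable
TARGET = [1] * 20       # hypothetical "ideal" genome the selector rewards

def fitness(genome):
    # count positions where the genome matches the target
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # the "degree of randomness": flip each bit with small probability
    return [1 - g if random.random() < rate else g for g in genome]

# random initial population of 30 genomes
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]

for generation in range(100):
    # the "selector that lets the best survive": keep the fittest half
    population.sort(key=fitness, reverse=True)
    survivors = population[:15]
    # refill the population with mutated copies of survivors
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

best = max(population, key=fitness)
```

Note that this also illustrates the objections in the reply: if the fitness function is "stupid", or mutation overwhelms selection, the population dissolves into noise instead of improving - survival alone guarantees nothing.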

Here lies a complexity of another kind. We need to invent algorithms that evolve, and that don't reach their limits at the level of a 3-year-old, but go further. That's difficult. It's like trying to give 1 million balls specific initial accelerations so that, a few weeks after we let them go, they form any wanted geometric shape after colliding with each other zillions of times. The difficulty is not in making evolution possible; the difficulty is in making sure it doesn't stop after a few 'generations' and dissolve into random noise.
Anyway, it's a fascinating area of research. You really start looking at computers as living things.

But I think we have a theoretical beginning of how such a machine could work, modeled on our current understanding of the brain. Currently it is thought that thought is the product of a sort of mini-evolution in the brain, in terms of competing impulses and such. It's a mega-network of small, semi-intelligent bits that work together in harmony.
I'm not up to date on brain research, dunno. I only think that it's quite a bit more difficult than that. Holographic-like memory, operating not at the bit level but with abstract images of concepts, and thus to a degree insensitive to individual 'bits'. Technical/technological details are IMO secondary, although practical issues are not - we don't want the intelligent computer to be the size of the Moon. By details I mean that it's to a degree irrelevant how we deliver the signals; it's how they interact algorithmically that counts.

When the brain needs to solve a problem it hasn't faced before, it somehow engages parts of the brain unrelated to the problem, creating a sort of many-to-one focus. By that, it extends the problem at hand over the whole baggage of experience it has, and thereby detaches from inside the problem to 'above' it. And even though the experience it uses isn't directly applicable, it helps to make the forced induction, to turn the scale. After such an induction, the solution becomes part of experience, used in the future to make inductions in other areas. This way, the bigger whole is used to make progress in smaller areas. It's something like physicists engaging experience from mathematics, or even from the behaviour of cats as a species. Sometimes unexpected analogies can help. So yes, there is evolution.

What computers lack is that critical threshold of abstract experience; they can't detach from the problem at hand, they remain 'inside'. They can't solve problems without external help.

If you think about the brain, a lot of the experience it has comes from the external world: from books, talks, education. All that knowledge becomes part of the 'baggage' used for thinking. None of the knowledge is rock solid; a lot of the 'baggage' can actually be crap. Can you sense the reason for 'beliefs' and religion here? It's normal, and any brain consists of beliefs; it's actually the only possible way - without that, it can't be internally consistent. If that breaks, the mind goes nuts. Human programming is basically imparting the 'baggage' so that one's belief system changes in the wanted direction. Very dangerous to consistency if converting a mind to its opposites. But that's basically what our education is. It speeds up our 'baggage' creation and makes sure it's in the same direction as the rest of mankind's.

I wonder if we'll ever see computer that 'believes' it is Napoleon..

Our current computing is based strictly on formal logic - a determined processor, held at each point by programming. Extend that, and you just get the same thing, the problem being that the programs themselves can't keep up. WE can't keep up.

When is parallel computing successful then? When we exploit the fact that there are such sub-divisions. Notice the tremendous success of the SETI@Home project. For the success of AI, we need

(a) processors with a degree of randomness - perhaps quantum computing, or fuzzy logic can be the key,
(b) processors that function individually, but can communicate, and
(c) a whole new dynamic to programming - no longer instructions, but situations to react to.
SETI is not a good example. It is a brute-force approach. It scales well, but has very little capacity for intelligence. On the rest I agree with you, but with a few reservations:
I don't think any randomness is needed. The uncertainty of the external world is enough. Fuzzy logic is IMO indeed a key, and quantum computing too, but not because of uncertainty - because we need an enormous amount of computing power in crazily small volumes. Quantum superposition gives specific benefits that can't be mimicked well with digital computers. Individual processors are also not required; it's processes that count.
And yes, that programming. It's the major beast to conquer, a complete change in our thinking about programming, even though underneath there may be old-style instructions. It's just too much programming, and too complex, for an ordinary mortal to grasp.
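The "brute force that scales well" point about SETI@Home can be illustrated with an embarrassingly parallel work split. This is a generic sketch, not SETI's actual pipeline; the `analyze` stand-in and the chunk sizes are invented for the example:

```python
from concurrent.futures import ThreadPoolExecutor

def analyze(chunk):
    # stand-in for real signal analysis: report the chunk's strongest sample
    return max(chunk)

# fake "signal" data, split into independent work units
data = list(range(1000))
chunks = [data[i:i + 100] for i in range(0, len(data), 100)]

# each worker processes its unit with no knowledge of the big picture
with ThreadPoolExecutor(max_workers=4) as pool:
    partial = list(pool.map(analyze, chunks))

strongest = max(partial)  # trivial merge step back at the coordinator
```

This is exactly why such schemes scale so well and yet carry "little capacity for intelligence": every worker sees only its own chunk, and the only coordination is the cheap merge at the end.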

Anyway, our PCs are nowhere near all of the above; they are dumb. Chess engines are extremely limited in scope, always 'inside', and thus only adaptive, not creative. I believe quantum computing would be the kicker for computer intelligence, mainly because it's impossible to apply classical programming there, so a new kind of programming will be forced to make sharp advances.
 
  • #64
Originally posted by wimms
Not child's intelligence, child's knowledge. You cannot impart intelligence to a child. It is something that can only be inherent to a brain.

That's completely incorrect, as 1) intelligence and knowledge are not so easily separable; and 2) it is a "Nature" over "Nurture" viewpoint, when (IMO) the reality lies in a mix of both.

Yeah, I was afraid you'd get hung up on the word 'simple'. You missed my point, though. The chess rules are what's simple. They are defined. How you play within those rules is completely undefined, and a different story. There are only so many rough recommendations; the rest is pure exercise of intellect. Not so for computers: they play the game with brute-force computation. They use a set of preprogrammed algorithms. And those algorithms are again fully defined, though what they do is not.

I happen to know that there are chess engines that learn from each game and form patterns (like a human does). I also happen to know that a certain amount of understanding of the dynamics of chess (which is what usually separates human chess players from chess engines) can be imparted to a man-made computer.

Chess has very many facets, including strategies to exhaust the other intellect, emotional warfare, recognition of the opponent's experience in famous games and his preferences in them, deception, style. All those things go far outside the scope of mere computation. To use them, one has to engage a much wider intellect than is needed just on the board. Computers can't and don't do that. Often that's when they lose pathetically.

Actually, they can be made to do just that. In fact, ChessMaster 7000 has made it so you can play against Josh Waitzkin's style if you so choose, and this comes from the computer's having memorized Josh's games (like a human would do, only better) and developed his style.

Anyway, the chess engine discussion is really pointless, as I was just trying to make the point that a man-made computer's intelligence is, or can be, a mirror of our own.

Yes, but you could argue with the opponent up to some exchange of fire or cache, and sort it out so that you can restart the game or one of you is 'dead'. In any case, you won't 'crash'.

People who are obsessed with chess might. And that's really the point, as any computer (artificial or otherwise) can simply ignore chess if it so chooses, unless it is predisposed to care about nothing else.

Also, please remember that I am not trying to put man-made computers at an equal level of complexity to a human. I'm merely saying that their intelligence is not "less genuine", even if they have less of it (after all, an adult has greater intelligence than an infant, but that doesn't mean that the infant's intelligence is less genuine).

I'm confused all the way. How do you specifically understand 'complex'? Has it never occurred to you that the complexity of a CPU has nothing to do with its ability to think? It is a prerequisite, yes, but not the answer.

How can its complexity have nothing to do with it, and yet be a prerequisite?

In a computer, it's the software that defines its behaviour. Hardware is just the tool to do that within a reasonable timeframe.

Much like a human.

Haven't you ever met a person who is dumber than his dog? Have you heard that even the best of humans engage just a fraction of the capacity of our brains? Think about it: such a big brain, and possibly such a dumb person.

It's not about how big the brain is (a whale's brain is bigger than mine). It's about how complex the brain is, and how fast it computes.
 
  • #65
Originally posted by wimms
What computers lack is that critical threshold of abstract experience; they can't detach from the problem at hand, they remain 'inside'. They can't solve problems without external help.

I'm puzzled by your use of the word 'inside,' or the idea of 'detaching from the problem at hand.' This, as Mentat described pretty well, would be only a matter of complexity. Computers don't look at the broader picture because they were never programmed to do that. 'Detaching' would be just another aspect of the computer's AI software.
By the way, are you aware that there are programs that have effectively found mathematical proofs better than their human counterparts? With that in mind, is 'detaching from the problem at hand' really all that necessary?
 
  • #66
Originally posted by C0mmie
By the way, are you aware that there are programs that have effectively found mathematical proofs better than their human counterparts? With that in mind, is 'detaching from the problem at hand' really all that necessary?

The debate on the significance of using computers to find rigorous proofs within mathematics is still raging; as such, it's hard to see how they could be categorised as 'better', since some mathematicians are unsure whether they can validly be taken to constitute 'proofs'. Unless of course you're talking about 'applied' proofs, where you're just juggling already-established identities (i.e. using cos^2 x + sin^2 x = 1 to prove cos 2x = 1 - 2sin^2 x), rather than rigorous proofs in analysis. In the latter case, the question of what can be called a proof is one which requires a mainly philosophical approach, and, as such, the possible conclusions are going to be argued over for a long time to come yet.
 
  • #67
Originally posted by C0mmie
I'm puzzled by your use of the word 'inside,' or the idea of 'detaching from the problem at hand.' This, as Mentat described pretty well, would be only a matter of complexity. Computers don't look at the broader picture because they were never programmed to do that. 'Detaching' would be just another aspect of the computer's AI software.
We mix up all the time what a computer can do today and what a computer could do in the future. Also, I have no idea what you people mean by the word 'complex'. A forest is complex, water is complex, the Sun is damn complex, heck, a single atom is complex, the brain of a child is complex. That does not guarantee intelligence. A child without guidance and sensations will never become intelligent. Just being complex is not enough.

'Detaching' is yet another aspect of programming? Don't you find that to 'detach', the AI must have those other 'aspects' to consult? OK, suppose it does. But have you heard of combinatorial explosion? Do you have any idea how many 'yet another aspects' one would need to program computer intelligence, and how much effort each one of them would require? Sure, one might think it's possible. But consider this: it has been observed statistically, almost as a law, that for every 1000 lines of code, humans will make at least 1 bug. For every N bug fixes, they'll introduce another bug. When calculated, this leads to the observation that the practical limit for a single piece of software is around 2-5M lines (I don't remember the exact figures). After that, the code will have thousands of bugs and each bug fix will create more bugs than it fixes. This is no technological problem, but a pure human factor: humans are unable to keep concentration over so vast a task. Windows 2000 alone has more than 1M lines of code. Sure, that's not a hard limit, but it hints at a very real one. And coding AI would require magnitudes more lines of code. So there's good reason to assume that humans will never be able to program computer intelligence; it's beyond our mental ability today.
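The combinatorial explosion mentioned above is easy to quantify. This is a generic illustration; the "aspects" and "flags" are hypothetical, not counts from any real system:

```python
from math import comb

def pairwise_interactions(n_aspects):
    """Pairs of aspects that may interact; each pair is a potential bug site."""
    return comb(n_aspects, 2)

def configurations(n_flags):
    """Distinct on/off configurations of n boolean behaviours to test."""
    return 2 ** n_flags

# The counts grow far faster than the number of parts:
pairs_10 = pairwise_interactions(10)   # 45 interactions among 10 aspects
pairs_50 = pairwise_interactions(50)   # 1225 among 50
configs_30 = configurations(30)        # over a billion configurations
```

Even at only pairwise interactions the testing burden grows quadratically, and full configuration coverage grows exponentially - which is the force behind the "each bug fix creates more bugs" observation in the post.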

Originally posted by Mentat
How can it's complexity have nothing to do with it, and yet be a pre-requisite?
How can 'existence' have nothing to do with thinking, and yet be a prerequisite? How can you program if there is nothing to program on, or nothing that can fit your program?

You seem to believe that if we stuff together a few tons of neurons, that mass would become intelligent?
Complexity is a bunch of related simplicities. Any complex thing, if you look closely enough, will look quite simple. It's not the bunch, and not the simplicities, that matter, but the relations. Stupid relations won't yield intelligence, but noise. Random relations would yield random noise. It's the relations that are difficult to invent. Creating smart relations is not much different from programming software, and subject to similar human limitations.

I don't understand how you can take complexity for granted, 'only a matter of complexity'. The complexity of relations you might mean is the product; the bunch of simplicities is the prerequisite. To say that it's 'only a matter of complexity' is to say nothing. It's like saying that intelligence is just a matter of intelligence.

To create a thinking computer, we have to go from a bunch of simplicities all the way to intelligence. We are likely to fail. We could create a playground for chaos that would create its relations by itself via repeated self-interactions, and might start an evolution of its own. This too has astronomically small odds of succeeding. It takes enormous effort to create human-level intelligence, not 'just complexity'.
Somehow, we'll succeed, hopefully. But we are nowhere near that. Today we have only fake imitations that deceive us, without even remotely having the ability to think.
 
  • #68
For Mentat to say "humans are computers" is absurd.

You also failed to define any type of intelligence other than artificial. If there is a need, which there is, to identify artificialness, then what's the other option?

I think this is a classic case of someone being emotional over words.

Humans defined artificial to mean a certain thing. Now you're going and saying it doesn't mean that. Well, yes it does because that's the meaning WE GAVE IT. In other words, you're saying the brown cow isn't brown, or better yet a cow isn't a cow.

Regardless of whether you're speaking of intelligence, computers, or humans, your argument can quickly be ruined, as I did above, using simple logic.

Even if it wasn't so easily ruined, you failed to define types of intelligence. Even so, you would have given them definitions that were emotionally satisfying to you, and not definitions that fit with what humans consider to be their meanings.

Don't take offense at this. It's just that I read your post and formulated all this so quickly, it's my fingers that took the time here, not my brain.

If you DO take offense, turn that energy into removing the emotions you have in this case and instead use logic to criticize your own argument.

Good luck
 
  • #69
You also failed to define any type of intelligence other than artificial. If there is a need, which there is, to identify artificialness, then what's the other option?
Non-human intelligence. Intelligence that has nothing to do with mankind.

Humans defined artificial to mean a certain thing. Now you're going and saying it doesn't mean that. Well, yes it does because that's the meaning WE GAVE IT. In other words, you're saying the brown cow isn't brown, or better yet a cow isn't a cow.
No, sir. Rather that the terms we gave it are open to interpretation. One person's idea of brown may well be different from another's. The commonly visualised idea of artificial is indeed contradictory to the strict dictionary definition.
 
  • #70
Originally posted by wimms
Also, I have no idea what you people mean by the word 'complex'. A forest is complex, water is complex, the Sun is damn complex, heck, a single atom is complex, the brain of a child is complex. That does not guarantee intelligence. A child without guidance and sensations will never become intelligent. Just being complex is not enough.

I'm talking about a complex system, whose purpose is to think. The more complex this thinking machine is, the better it will be at it.

But have you heard of combinatorial explosion? Do you have any idea how many 'yet another aspects' one would need to program computer intelligence, and how much effort each one of them would require?

Combinatorial explosion can occur in the brain of human as well.

How can 'existence' have nothing to do with thinking, and yet be a prerequisite? How can you program if there is nothing to program on, or nothing that can fit your program?

I didn't say that "existence" had nothing to do with thinking. I asked you why you said that complexity had nothing to do with thinking, and then said that it was a pre-requisite.

You seem to believe that if we stuff together a few tons of neurons, that mass would become intelligent?

I'm going to try to be even more clear than I have been (if that's possible): Complex does not mean massive.

Complexity is a bunch of related simplicities. Any complex thing, if you look closely enough, will look quite simple. It's not the bunch, and not the simplicities, that matter, but the relations.

Exactly, so if the relations are complex enough, the computer can be as intelligent as a human, right?

I don't understand how you can take complexity for granted, 'only a matter of complexity'. The complexity of relations you might mean is the product; the bunch of simplicities is the prerequisite. To say that it's 'only a matter of complexity' is to say nothing. It's like saying that intelligence is just a matter of intelligence.

Hey, you were the one who said that complexity alone couldn't produce intelligence equal to a human's.

To create a thinking computer, we have to go from a bunch of simplicities all the way to intelligence. We are likely to fail. We could create a playground for chaos that would create its relations by itself via repeated self-interactions, and might start an evolution of its own. This too has astronomically small odds of succeeding. It takes enormous effort to create human-level intelligence, not 'just complexity'.

The enormous human effort would be toward achieving complexity.

Somehow, we'll succeed, hopefully. But we are nowhere near that. Today we have only fake imitations that deceive us, without even remotely having the ability to think.

"Think" doesn't mean "be creative", like you are making it seem. "Think" just means to "compute".
 
  • #71
Originally posted by LogicalAtheist
For mentat to say "humans are computers" is absurd.

OK. Perhaps you should explain why, as I don't see its absurdity. Are our brains not computers?

You also failed to define any type of intelligence other than artificial. If there is a need, which there is, to identify artificialness, then what's the other option?

Well, in the sense of "artificial" that I was talking about at the beginning of this thread, the other option is "genuine".

I think this is a classic case of someone being emotional over words.

Yes, I am the PF king of semantics, and don't you forget it.

Humans defined artificial to mean a certain thing. Now you're going and saying it doesn't mean that. Well, yes it does because that's the meaning WE GAVE IT. In other words, you're saying the brown cow isn't brown, or better yet a cow isn't a cow.

What did humans define "artificial" to mean, pray tell?

Regardless of whether you're speaking of intelligence, computers, or humans, your argument can quickly be ruined, as I did above, using simple logic.

You did nothing of the kind.

Don't take offense at this.

Of course not.

It's just that I read your post and so quickly formulated all this, it's my fingers that took the time here not my brain.

Let's try thinking before typing, to avoid "swiss cheese logic", shall we (no offense)?

Good luck

Thanks. :smile:
 
  • #72
Although Mentat has made false statements, surely unknowingly, as I don't doubt his honesty, and although he has broken rule 3, which I am about to add to my signature because it's very common now too, I have a tidbit to help his case, at least I hope so.

Brains of species are defined as complex or advanced based on the amount of neural connections (nerves, I should say), and thus the amount of possibilities, and other similar criteria. Of course, taking notice of which parts of the brain a species has is also important.

Recently I saw some tech shows involving advanced intelligence. They compared these machines to human brains by the amount of "electrical" connections in them vs. the brain.

IMPORTANT INFORMATION.

The brain "does what it does" using two things

1. chemical interactions
2. electrical interactions.

The brain is entirely nerve cells. WITHIN a nerve cell is the electrical reaction. BETWEEN nerve cells is where the chemical interactions take place.

Now, since a given piece of technology would NOT need to have this in-between space, it could surely imitate a brain using only electricity.

Surprised? Yes, indeed. Our brain processes are completely ELECTROCHEMICAL. That is the summation of processes by which we "think" and do other things, like access memories or gain new ones.

So, I ask you, is a machine that uses only electrochemical interactions to process information artificial?

If so, then are we artificial? Despite what many wish to believe, that is all we consist of, brain-wise.

Perhaps the difference lies in the "nature" of what we can do with our brains?

A machine can access information, accept new information, change information, and output information.

When can it USE information to create ITS OWN NEW INFORMATION?

Ever heard of DEEP BLUE?

Deep Blue is currently considered the most intelligent machine. It is a machine designed to do one thing: play chess better than any human being, and never lose.

Yes, it does lose. It is also designed to 'think': rather than having an entire database of all possible moves and simply accessing it, it does this "thinking" thing.

It uses electricity, no chemicals.

Is it artificial, and if so, why? If you need information on it, just use Google and type "deep blue", perhaps adding "chess". I'll post this in its own place, because I suppose it will add a lot to Mentat's questions and propositions...
 
  • #73
I am out. This is becoming noise exchange. See you in other threads.
 
  • #74
Noise exchange? To me this is a good subject, one that, with common definitions, is certainly open to opinion. Sorry you don't feel that way!

Here is a link to the revived part citing his original post and my post which I feel will open up some new thoughts for people interested:


https://www.physicsforums.com/showthread.php?s=&postid=26086#post26086

Enjoy!
 
  • #75
Originally posted by LogicalAtheist
Although Mentat has made false statements, surely unknowingly, as I don't doubt his honesty, and although he has broken rule 3, which I am about to add to my signature because it's very common now too, I have a tidbit to help his case, at least I hope so.

How about we remember that Mentat reads these as well, and speak to him if we have something to say to him. There are few things more patronizing than speaking of someone in the third person while they are present (and thus, being a teen and a middle child, there are few things I hate worse).

Recently I saw some tech shows involving advanced intelligence. They compared these machines to human brains by the amount of "electrical" connections in them vs. the brain.

IMPORTANT INFORMATION.

The brain "does what it does" using two things

1. chemical interactions
2. electrical interactions.

The brain is entirely nerve cells. WITHIN a nerve cell is the electrical reaction. BETWEEN nerve cells is where the chemical interactions take place.

Now, since a given piece of technology would NOT need to have this in-between space, it could surely imitate a brain using only electricity.

Surprised?

Not at all, any biology textbook would tell you the same.

So, I ask you, is a machine that uses only electrochemical interactions to process information artifical?

It can be, if you define "artificial" as man-made. It's not any less genuine than any other process of thinking (of course, I can't think of any other processes of thinking, but the fact remains).

If so, then are we artifical? Despite what many wish to believe, that is all we consist of brainwise.

Please post which definition of "artificial" you are discussing, so that I can respond better.

Perhaps the difference lies in the "nature" of what we can do with our brains?

What "difference"? The "difference" between what and what?

A machine can access information, accept new information, change information, and output information.

When can it USE information to create ITS OWN NEW INFORMATION?

I've seen humans do this all of the time. Humans are machines (unless you deny that our bodies are collections of parts that serve purposes), and they can do that. Besides, even if you exclude humans, you are talking about creativity, not intelligence.

Ever heard of DEEP BLUE?

Deep Blue is currently considered the most intelligent machine. It is a machine designed to do one thing: play chess better than any human being, and never lose.

Yes, it does lose. It is also designed to 'think': rather than having an entire database of all possible moves and simply accessing it, it does this "thinking" thing.

It uses electricity, no chemicals.

Is it artificial, and if so, why? If you need information on it, just use Google and type "deep blue", perhaps adding "chess". I'll post this in its own place, because I suppose it will add a lot to Mentat's questions and propositions...

Deep Blue is man-made, and so it is that kind of "artificial". I don't think its intelligence is any less genuine than a human's, if that's what you mean by "artificial".
 
  • #76
I responded to this on the new page. admin feel free to lock this as it's on the sequel!
 
  • #77
Originally posted by LogicalAtheist
I responded to this on the new page. admin feel free to lock this as it's on the sequel!

Kerrie, I have no problem with that (locking this one, and having us continue on LogicalAtheist's thread). It will be easier to deal with than responding to two threads.

Of course, his doesn't have the poll, but that's rather unimportant now.
 
