Artificial Intelligence vs Human Intelligence

  • Thread starter C0mmie
  • #26
Greetz,

00. Claiming that technology will never be capable of something is more likely to end up looking senseless than claiming that it will.

01. If we assume that a mind is the software loaded on neural hardware, making one is not difficult.

02. From the technical point of view, we needn't understand how a mind works in order to produce (not replicate) one. The main point about neural networks is that they can learn.

03. I was told there are basically these types of control systems: PID (Proportional-Integral-Derivative), Fuzzy, Neural and Neuro-Fuzzy.

04. PID systems revolve around an equation that defines the system's behavior. To make a mind this way we would have to find the equation governing it, and since a mind is chaotic, its governing equation is too complex to be found.
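As a rough illustration of 04, a discrete PID update loop might look something like the sketch below (Python). The gains and the toy "plant" response are invented purely for illustration, not taken from any real controller.

```python
# Minimal discrete PID loop driving a toy heater toward a setpoint.
# Gains (kp, ki, kd) and the plant model are invented for illustration.
def pid_step(error, prev_error, integral, dt, kp=2.0, ki=0.5, kd=0.1):
    integral += error * dt                       # Integral term accumulates past error
    derivative = (error - prev_error) / dt       # Derivative term reacts to change
    output = kp * error + ki * integral + kd * derivative
    return output, integral

setpoint, temp, integral, prev_err = 20.0, 15.0, 0.0, 0.0
for _ in range(50):
    err = setpoint - temp
    power, integral = pid_step(err, prev_err, integral, dt=1.0)
    temp += 0.1 * power                          # toy first-order "plant" response
    prev_err = err
print(round(temp, 2))                            # settles near the 20.0 setpoint
```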

05. The core of a fuzzy system is its rule set. It is necessary to define the set of rules governing the system, e.g. a simple thermostat may be described with two rules like "turn on if temp. < 20 °C" and "turn off if temp. > 20 °C." Using fuzzy logic, whose truth values are chosen over a range of real numbers instead of just T/F, such a system can respond accurately enough in the face of predicted situations. A mind has too many rules for that approach.
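A toy sketch of how fuzzy truth values replace a hard 20 °C threshold might look like this; the membership function and the "defuzzification" into heater power are made up just to show the idea.

```python
# Toy fuzzy thermostat: truth values in [0, 1] instead of a hard on/off rule.
def too_cold(temp, low=18.0, high=22.0):
    """Degree to which 'temperature is cold' is true: 1 at 18 C, 0 at 22 C."""
    if temp <= low:
        return 1.0
    if temp >= high:
        return 0.0
    return (high - temp) / (high - low)

def heater_power(temp):
    # Rule: "IF too cold THEN heat", defuzzified as a proportional power level.
    return too_cold(temp) * 100.0   # percent of full heater power

for t in (16.0, 19.0, 21.0, 23.0):
    print(t, "C ->", heater_power(t), "% power")
```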

06. Neural networks are the right way to make a mind. In fact, it was the study of the mind that led to the study of neural networks. A neural network can't be represented with a set of rules or an equation. After an artificial neural network is set up (don't ask me how, I merely know it's a hard job), it is "trained" to do the right thing when it encounters certain situations. All that is necessary is a large enough set of (stimulus, response) pairs and a feedback loop (this is the reward/punishment, just like training a pet) that changes the neural interconnection weights according to some rules (the advanced mathematics used in neuroscience and AI). After a while, the network starts to converge to a certain level of accuracy (a high one, hopefully) which can later be improved with further training.
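A minimal sketch of that training loop, assuming nothing more than a single artificial neuron and an invented (stimulus, response) set for the OR function, could look like this; the learning rule shown is plain gradient descent, which is only one of the weight-update rules the mathematics offers.

```python
import math, random

# A one-neuron "network" learning OR from (stimulus, response) pairs.
# The data, learning rate and iteration count are made up; the point is the
# feedback loop that nudges interconnection weights until responses converge.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = random.uniform(-1, 1)
lr = 0.5

def respond(x):
    s = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-s))           # sigmoid activation

for epoch in range(10000):
    for stimulus, target in data:
        y = respond(stimulus)
        err = target - y                        # the "reward/punishment" signal
        grad = err * y * (1 - y)                # gradient of squared error w.r.t. s
        w[0] += lr * grad * stimulus[0]         # adjust interconnection weights
        w[1] += lr * grad * stimulus[1]
        b += lr * grad

print([round(respond(s), 2) for s, _ in data])  # approaches [0, 1, 1, 1]
```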

07. No one really knows what happens inside the neural network. This is the twist: you don't know what a mind is, only that it is made on a neural network. All you need to make a "human" mind is to set up a neural network of the size and complexity of the brain, embed subtle algorithms to mimic the process of learning (changing weights and/or interconnections and/or other factors), give it "human" input (i.e. it must have a set of sensory input lines equal to those of a human) and train it. These are cumbersome tasks, but they aren't far-fetched.

08. It isn't necessary that AI act the human way. A mind can be set up easily, but what it will turn into can hardly be predicted. It may be anything; it may even have features that human minds lack.

09. Since AI can in no way be compared to human beings, AI ethics can't be compared to human ethics either. An AI entity may perceive what we can't, may "think" in an inhuman context, may lack the slightest similarity to the human mind and, finally, may "do" something that we can't even imagine. It is surely bound to some rules, but these aren't human rules. Consequently, it is indeed sentient even though it may be unable to communicate with human beings, and vice versa.

10. Humanoid AI is sci-fi. It is much harder to make humanoid AI than to make powerful AI. We already have neural networks used in OCR applications and in many decision-making settings, but it is impossible for a human being to understand what an OCR network "understands." They do the work, but they're different from us; they aren't human.

11. That AI is not human doesn't mean that human beings are in any way superior or inferior, but that human beings and non-humanoid AI are incomparable. They're far too different and have almost no common ground.
 
  • #27
01. If we assume that a mind is the software loaded on neural hardware, making one is not difficult.

02. From the technical point of view, we needn't understand how a mind works in order to produce (not replicate) one. The main point about neural networks is that they can learn.
I think this is too simple a definition of a mind and does not reflect the heart of this thread. By this definition ants and worms have minds.

05. The core of a fuzzy system is its rule set. It is necessary to define the set of rules governing the system, e.g. a simple thermostat may be described with two rules like "turn on if temp. < 20 °C" and "turn off if temp. > 20 °C." Using fuzzy logic, whose truth values are chosen over a range of real numbers instead of just T/F, such a system can respond accurately enough in the face of predicted situations. A mind has too many rules for that approach.
If the mind you are talking about is that of a worm or an ant, then it may not be too many rules for the purpose. Again, you are losing the heart of the thread in details. I brought up fuzzy logic merely as an example of the progress of alternative logics in the area of AI where classical logic fails.

06. Neural networks are the right way to make a mind. In fact, it was the study of the mind that led to the study of neural networks. A neural network can't be represented with a set of rules or an equation.
Not yet they can't; this is the mathematics of the future, which M-theory and other cutting-edge mathematics are attempting to address.

09. Since AI can in no way be compared to human beings, AI ethics can't be compared to human ethics either.
This statement defies the evidence which I stated earlier. AI has proven impossible for experts in various fields to distinguish from other human experts. Thus it can be compared to the human mind and it is only natural to do so.
 
  • #28
1.
This statement defies the evidence which I stated earlier. AI has proven impossible for experts in various fields to distinguish from other human experts. Thus it can be compared to the human mind and it is only natural to do so.
Just because you can't distinguish the way AI performs a specific task from the way a human performs it doesn't mean the same ethics apply to AI as to us. Personally, I would have no problem torturing an intelligent robot for the sole purpose of entertainment, but the same laws don't apply to humans.


2.
It may be appropriate to bring Marvin Minsky's theory into this thread. I am no expert on this subject, but from what I understand his theory centers on the following: the mind as a whole is a "society" composed of smaller entities he calls "agents." Each agent by itself is incapable of thought or consciousness, but it is from the interactions of these different agents that we get thought and self-awareness.
(Correct me if I'm wrong)
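As a toy illustration of that idea (only an illustration: the agents, their vote weights and the state fields are invented here, not taken from Minsky), here is a sketch in which each agent is trivial by itself and a decision emerges only from their combined votes.

```python
# Toy "society of mind": each agent alone is trivial, behaviour emerges from
# their interaction. The agents and their votes are made up for illustration.
def hunger_agent(state):
    return {"eat": 1.0} if state["hours_since_meal"] > 5 else {}

def safety_agent(state):
    return {"flee": 2.0} if state["threat_nearby"] else {}

def curiosity_agent(state):
    return {"explore": 0.5}

def decide(state, agents):
    votes = {}
    for agent in agents:                      # gather each agent's proposal
        for action, weight in agent(state).items():
            votes[action] = votes.get(action, 0.0) + weight
    return max(votes, key=votes.get)          # the society's combined choice

state = {"hours_since_meal": 7, "threat_nearby": False}
print(decide(state, [hunger_agent, safety_agent, curiosity_agent]))  # "eat"
```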
 
  • #29
Just because you can't distinguish the way AI performs a specific task from the way a human performs it doesn't mean the same ethics apply to AI as to us. Personally, I would have no problem torturing an intelligent robot for the sole purpose of entertainment, but the same laws don't apply to humans.
This is the same point I brought up earlier. A Pantheist might treat their computer better, while a Theist might see this as silly. The classic Disney film "Herbie the Love Bug" is a case in point. Morality is just not the same from one person to the next, much less one culture to the next, and science does not really have a morality of its own. As machines continue to progress, it looks likely that even their designers and builders will no longer be able to say with any scientific certainty whether they are conscious or not. When that happens, some may view your torture as inhumane while others see it as innocuous, just as occurred with slaves.
 
  • #30
www.kurzweilAI.com
 
  • #31
Just think, if we make the machines perfect enough, we may not even need any humans. What would be the point, if they could do everything we could do, but better? What was the name of that movie, The Stepford Wives?

Whose purpose would it really serve to do such a thing? The big manufacturing conglomerates? What would be the point of people hanging around if there was nothing "useful" for them to do?

Just numbers and machines ...
 
  • #32
1. For wuliheron:
I think this is too simple a definition of a mind and does not reflect the heart of this thread. By this definition ants and worms have minds.
If we accept that the mind is somehow firmly related to the brain, that the brain is a neural network, and that a neural network can be studied with neuroscience, then the definition is a just and fair one. Besides, I don't make definitions to reflect the heart of the thread; I make them to reflect the heart of my idea and to let other participants talk about my idea rationally/mystically/passionately/[beep].

Do you have a problem with an ant or a worm having a mind? What matters is magnitude and complexity. Homo sapiens has one of the largest brains in proportion to its body (dolphins, blue whales, gorillas and other apes are close behind) and one of the most complicated in terms of interconnections and single-neuron behavior. There's absolutely no problem with an ant having a mind, a little one at least. And there's absolutely no problem with a human being having a lesser mind compared to a dolphin. We have only one peculiarity: our new ability of toolmaking, and that's why we're sometimes called Homo faber (partly because of the flexible, oddly positioned thumb on our hands).
If the mind you are talking about is that of a worm or an ant, then it may not be too many rules for the purpose. Again, you are losing the heart of the thread in details. I brought up fuzzy logic merely as an example of the progress of alternative logics in the area of AI where classical logic fails.
You're underestimating a worm! The simplest living being on this planet is unimaginably complex.

Details are necessary here to avoid confusion where there's really no special problem. One says "we can't make a mind," the other says "oh, we can!" The way out is a detailed description of how we can do it.

Boolean logic doesn't fail; it stops where it reaches the limits it was designed for. What fails is the individual who uses Boolean logic for a purpose it wasn't made to serve.
Not yet they can't; this is the mathematics of the future, which M-theory and other cutting-edge mathematics are attempting to address.
I can't understand your point here. I was told M-theory is the unification of the current five variants of string theory (none of which I know the least bit about), supposed to subsume all of them; is that wrong?

If that's right, then M-theory is physics and not mathematics. Even if it is accompanied by a new branch of mathematics, it can't claim dominance in the territory where chaos theory reigns. Chaos theory implies that a chaotic system is governed by an equation that is out of our reach. The equation is out of reach either because it has passed a level of complexity that puts it out of reach forever (the determinist view inside chaos theory) or because our current processing power can't meet its demands (the non-determinist view inside chaos theory). These two choices mark the dividing line among chaos theory's users.
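As a small illustration of why a chaotic system's equation is "out of reach" in practice: the standard logistic map is a one-line rule, yet two starting points differing by a billionth diverge completely after a few dozen steps (the starting values below are arbitrary).

```python
# Logistic map x -> r*x*(1-x): a trivial rule whose long-term behaviour is
# practically unpredictable because tiny input differences blow up.
def orbit(x, r=4.0, steps=60):
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a = orbit(0.200000000)
b = orbit(0.200000001)      # differs by one part in a billion at the start
print(a, b)                 # after 60 steps the two trajectories bear no resemblance
```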
This statement defies the evidence which I stated earlier. AI has proven impossible for experts in various fields to distinguish from other human experts. Thus it can be compared to the human mind and it is only natural to do so.
These indistinguishable machines are called Turing machines, I guess. A perfect Turing machine hasn't been built yet. Please give a link to a source that confirms the making of a perfect Turing machine.

After all, I just can't understand exactly what you're opposing in my post. Will you please tell me which part of it I should rethink?
 
  • #33
We have only one peculiarity: our new ability of toolmaking, and that's why we're sometimes called Homo faber (partly because of the flexible, oddly positioned thumb on our hands).
Humans have the ability to run after a moving target and throw a rock or swing a stick at it. No other animal has this ability, which is only partly due to our opposable thumb. Chimps use sticks all the time, sometimes to sneak up behind each other and bash their brains in, but they can't sprint. We can sprint fast enough to catch a horse. Notably, the opposable thumb and the agile physiology needed to do these things evolved before the human brain nearly tripled in size during the last ice age, and one third of our brain is devoted to vision.

I can't understand your point here. I was told M-theory is the unification of the current five variants of string theory (none of which I know the least bit about), supposed to subsume all of them; is that wrong?
M-theory is a purely mathematical theory without a shred of physical evidence to support it. The only reason it is considered a cutting-edge physics theory is because it has swallowed whole the mathematics of every other theory devised to date and is doing the same for mathematics in general.

I've come across some mathematicians who've complained that physicists are allowed much more freedom in their work than mathematicians. Unlike mathematicians, who are constrained to rigorous proofs, physical theorists are allowed significantly more latitude. As I said, physics has diverged from mathematics a great deal in the last century and is only now beginning to converge again thanks to M-theory. If you want to know more, I suggest Michio Kaku's book, "Hyperspace".

These indistinguishable machines are called Turing machines, I guess. A perfect Turing machine hasn't been built yet. Please give a link to a source that confirms the making of a perfect Turing machine.

A Turing machine is a whole nother animal. All I've said is programs have passed the Turing Test.

http://cogsci.ucsd.edu/~asaygin/tt/ttest.html
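To make the distinction concrete: a Turing machine is a formal model of computation (a tape, a head and a state table), not a conversation program. A minimal simulator, with a made-up one-state machine that just flips bits and stops when it runs off the tape, might look like this.

```python
# Minimal Turing machine simulator. The transition table is a made-up
# one-state machine that flips every bit and stops when it leaves the tape.
def run(tape, table, state="A", halt="HALT"):
    tape = list(tape)
    head = 0
    while state != halt and 0 <= head < len(tape):
        symbol = tape[head]
        write, move, state = table[(state, symbol)]   # look up (state, symbol)
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape)

flip_bits = {
    ("A", "0"): ("1", "R", "A"),
    ("A", "1"): ("0", "R", "A"),
}
print(run("01101", flip_bits))   # -> "10010"
```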
 
  • #34
Originally posted by wuliheron
A Turing machine is a whole nother animal. All I've said is programs have passed the Turing Test.
Manuel_Silvio was right regarding the Turing machine. Every year the contestant closest to passing the test is awarded the prize, but the test has yet to be passed.
 
  • #35
Parts of the Turing test have been passed, and contestants are getting closer than ever before. The main point is that people can be fooled by such programs, which are still in the infantile stage of development.
 
  • #36
A problem that I'm noticing in this discussion is the derogatory use of the word "machine". By definition, the human body is a machine, so calling a man-made computer a "machine" doesn't make any distinction betwixt it and us.
 
  • #37
Originally posted by Mentat
A problem that I'm noticing in this discussion is the derogatory use of the word "machine". By definition, the human body is a machine, so calling a man-made computer a "machine" doesn't make any distinction betwixt it and us.
What's the diff? Just like your computer, the human mind processes information too ...
 
  • #38
Originally posted by Iacchus32
What's the diff? Just like your computer, the human mind processes information too ...
That's what I was saying. It is the other members that are differentiating between the two. I was trying to correct that.
 
  • #39
greeneagle3000

What's the big idea? It's our fault if they go against us; we made it that way.
But as Radiohead says, we need people with hammers.
 
