Can computers truly replicate human intelligence and emotions?

  • Thread starter: Sorry!
  • Tags: Computers
In summary, the conversation discusses various aspects of computers, including their intelligence, ability to think, capacity for emotions, and potential for evolution. The speakers debate whether computers can truly exhibit these traits or if they are simply programmed by humans. They also consider the possibility of computers having free will and the challenges of replicating the complexity of the human brain in artificial devices. Overall, the conversation highlights the ongoing advancements and potential of AI technology.
  • #36


Meh, this is why I originally posted this in the philosophy forum. Everyone here will just presume we're speaking of the laptop I'm typing on, or just not go to the depth I was aiming for...
 
  • #37


I think the real problem is that there's no good definition of intelligence. Until we have that, all of this is just intangible, conjectural, emotional supposition.

We need a definition, a tangible definition, whereby we can point at something, do some sort of test, and measure the level of intelligence.

Perhaps an IQ test? But surely it would be fairly straightforward to design a computer program that would perform well on IQ tests.

Does anyone have any ideas for what might constitute a list of criteria for intelligence? What about this:

(1.) The entity can and does receive external stimuli and produce verifiable responses to these stimuli in such a way that the responses depend somewhat upon the stimuli.
(2.) The entity has some capacity for memory and recall; that is, responses to stimuli can draw on previous stimuli and previous responses. Such memory and recall must be evident, inasmuch as it must manifest itself.
(3.) The entity can adapt its memory by addition, modification, and deletion, to some satisfactory level; the capacity for addition need not be infinite but should be large, the capacity for modification need not be complete but should be broad, and the capacity for deletion need not be perfect but should be practical. The updating of memory should be observable per se or via responses to stimuli.
(4.) The entity's memory should include proportionally more information from stimuli than was presented to the entity ab initio.

Ideas? Thoughts?
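
To make the list concrete, here is a minimal sketch of how criteria (1.)–(4.) might be framed as a testable interface. This is illustrative Python only; every name in it is hypothetical:

```python
# Hypothetical sketch of an entity testable against criteria (1.)-(4.).
class CandidateEntity:
    def __init__(self, innate_knowledge):
        # (4.) Memory starts with some ab initio information...
        self.memory = list(innate_knowledge)
        self.innate_size = len(self.memory)

    def respond(self, stimulus):
        # (1.) The response verifiably depends on the stimulus,
        # (2.) and draws on previously stored stimuli (memory/recall).
        response = f"{stimulus!r} seen {self.memory.count(stimulus)} time(s) before"
        self.memory.append(stimulus)  # (3.) addition to memory
        return response

    def forget(self, item):
        # (3.) Deletion need not be perfect, just practical.
        if item in self.memory:
            self.memory.remove(item)

    def learned_fraction(self):
        # (4.) Proportion of memory acquired from stimuli rather than built in.
        acquired = max(len(self.memory) - self.innate_size, 0)
        return acquired / max(len(self.memory), 1)
```

Criterion (4.) would then amount to requiring that learned_fraction() end up above one half after a long run of stimuli.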
 
  • #38


AUMathTutor said:
I think the real problem is that there's no good definition of intelligence. Until we have that, all of this is just intangible, conjectural, emotional supposition.

We need a definition, a tangible definition, whereby we can point at something, do some sort of test, and measure the level of intelligence.

Perhaps an IQ test? But surely it would be fairly straightforward to design a computer program that would perform well on IQ tests.

Does anyone have any ideas for what might constitute a list of criteria for intelligence? What about this:

(1.) The entity can and does receive external stimuli and produce verifiable responses to these stimuli in such a way that the responses depend somewhat upon the stimuli.
(2.) The entity has some capacity for memory and recall; that is, responses to stimuli can draw on previous stimuli and previous responses. Such memory and recall must be evident, inasmuch as it must manifest itself.
(3.) The entity can adapt its memory by addition, modification, and deletion, to some satisfactory level; the capacity for addition need not be infinite but should be large, the capacity for modification need not be complete but should be broad, and the capacity for deletion need not be perfect but should be practical. The updating of memory should be observable per se or via responses to stimuli.
(4.) The entity's memory should include proportionally more information from stimuli than was presented to the entity ab initio.

Ideas? Thoughts?
Yes, this was the purpose of my posting this: to gain insight into these areas of thought.

I don't see any problems with this idea of intelligence.

I remember a way of testing whether a computer was 'intelligent'; it went along these lines:

Suppose a person is placed in a room with two phones. One phone goes to room A, where a human is, and the other to room B, where a computer is. The person can call either room and ask any questions they like. If the person cannot tell which respondent is the computer and which is the human, then the computer must be thinking and be intelligent.

The problem with this is making computers talk, lol :D But maybe if a human sat in the computer's room and read out the answers the computer gave, it would work...
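
A text-based version of that test is easy to sketch. Here's a toy outline in Python, with the judge and the two respondents as placeholder functions (all names hypothetical):

```python
import random

def imitation_game(ask, human, machine, n_questions=10):
    """Toy outline of the test described above: a judge questions two
    hidden respondents and must guess which one is the machine."""
    rooms = {"A": human, "B": machine}
    if random.random() < 0.5:          # hide which room holds the machine
        rooms = {"A": machine, "B": human}

    for _ in range(n_questions):
        q = ask()                      # the judge picks any question
        print("A:", rooms["A"](q))
        print("B:", rooms["B"](q))

    guess = input("Which room holds the machine (A/B)? ")
    return rooms.get(guess) is machine  # True = judge found the machine
```

If, over many rounds, the judge's guesses are no better than chance, the machine passes.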
 
  • #39


Sorry! said:
If the person cannot tell which respondent is the computer and which is the human, then the computer must be thinking and be intelligent.
This is the famous Turing test. The problem with it is the onus it places on the tester to ask the right questions. Certainly if I ask the computer and the human to keep quiet, I won't know if the computer is intelligent or just unplugged.
 
  • #40


Or it could be text-based. Yes, this is the Turing test, or similar to it.

Then again, I think Dijkstra was right when he remarked that whether machines can think is "about as relevant as the question of whether submarines can swim": computers don't need to think the way humans do for what they're doing to be beneficial and impressive.
 
  • #41


Another problem with the Turing test is that it places too much emphasis on anthropomorphic qualities associated with intelligence. For instance, if you ask the question "How many fingers does a person have?", a person may say something like "5 on each hand, for a total of 10". A computer may say "It depends on the person; most people have 10 fingers, five on each of 2 hands, but your experience may differ."

The more intelligent of the two answers may be seen as "non-human". Also, things like wit and sarcasm may not be programmed into the computer, and people can pick up on things like this.

"How many librarians does it take to screw in a lightbulb?"
Person: "At least two, but you can't fit a librarian in a lightbulb."
Computer: "One librarian."
 
  • #42


zoobyshoe said:
In the matter of computers my reasoning is that they are not conscious so the issue of enslavement is absurd. You can no more "enslave" a computer than you could "set it free".
The reasoning here is too general. You can't enslave a human-meat burger either, and it can be evaluated that the burger has no conscious persona, even though its biological composition is comparable to that of a full-blown person.

You can't enslave Microsoft Windows either (this isn't entirely true the other way around, but that's another story). However, you could enslave a computer that has personal intentions and a wish not to be enslaved, given some minimal level of sophistication.

It's about what it does, not what it is made of. A computer is nothing if not the process it is running.

Could you enslave a brain-dead human? That is an equivalent question, one of encapsulation rather than of the resident information system, which should be the point of the topic.

If you're going to make a comparison against a conscious human, you must do it against a conscious computer, not an arbitrary Commodore machine with a program cassette of your choosing.

You could argue that a computer cannot attain a comparable level of functional sophistication and thus is not subject to an equivalent form of morality; however, there would be no argument if this were known to be true. The entire question is: IF a computer could do this, would that computer be entitled to an equivalent level of personhood? The answer in such circumstances is, without any rational form of opposition, clearly yes.
 
  • #43


Negatron said:
You could argue that a computer cannot attain a comparable level of functional sophistication and thus is not subject to an equivalent form of morality; however, there would be no argument if this were known to be true. The entire question is: IF a computer could do this, would that computer be entitled to an equivalent level of personhood? The answer in such circumstances is, without any rational form of opposition, clearly yes.
O.K. Thanks.
 
  • #44


As to what an intelligent computer would consist of: I would expect something largely similar to what a human brain consists of.

Large parallel networks for very thorough Bayesian inference from sensory information, and a limbic system which establishes output motives for the inferred information. The big problem we have is the inherent abstraction of information in such a system; thus we cannot establish ALL of its functional qualities, merely major categorical tendencies. This means we cannot determine precisely what a system is doing by observing it, even though we can get probability measures.
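
As a point of reference, the elementary step in that kind of Bayesian inference is just Bayes' rule applied to noisy sensory evidence. A minimal sketch in Python, with toy numbers of my own:

```python
def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """P(H|E) = P(E|H) P(H) / P(E), with P(E) expanded over H and not-H."""
    p_evidence = (p_evidence_given_h * prior
                  + p_evidence_given_not_h * (1.0 - prior))
    return p_evidence_given_h * prior / p_evidence

# e.g. prior belief 0.5 that an edge is present in the visual field;
# a detector fires with p=0.9 if it is there and p=0.2 if it is not:
print(posterior(0.5, 0.9, 0.2))  # ~0.82: belief strengthened by the evidence
```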

The good news is that the brain is not particularly complicated. The genetic seed that establishes its structure is quite small, and from analyzing the neocortex, it's essentially just a vastly repeating tree-like branching structure with an explicit, small, consistent, and discrete number of layers. The complexity of the system is not so much in the structure as in the objective comprehension of the information absorbed by it.

This isn't anything exclusive to the brain: we have known for decades that genetic algorithms produce small networks that serve a purpose, yet it cannot be determined, even in a trivially small network, how the composition of the network allows it to perform the desired task, or how further information will alter its behavior. In comp-sci terms, it's an undecidable problem. To know what it's going to do requires doing the same thing. To simplify, we can build it, but we will nevertheless not know precisely how it works. However, there are proofs that this is true of any system we observe, not just neural networks, so incomplete understanding should not be a shocker and, as far as the scientific establishment has shown, is not a significant obstacle for progress.

The main complication in developing computer intelligence is processing power. We have FPGA devices now which can evolve neural systems of far higher complexity than anything in years prior, but this is still far from sufficient for cognitive systems anywhere close to a human.

We could simply copy the human neural network into a computer, which I find to be the most promising immediate objective. It does not require an intractable level of computational genetic adaptation; that work has already been done for us by nature. What it does require is sophisticated automated interpretation of brain scans. There are numerous approaches to this, ranging from TEM to subsurface optical scanning; THz radiation is a future prospect. And the brain's roughly hundred billion neurons nevertheless require a vast amount of computational resources: the rough approximation today is in the range of an exaflop as far as the relevant computational features are concerned. IBM says this can be done within 10 years if sufficient funding is available. Considering that computational neuroscience funding has increased by a factor of over 20 in the last decade, it wouldn't surprise me if such projects caught up to optimistic expectations of their 10-year potential.
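
As a back-of-envelope check on that exaflop figure (my own rough numbers, not anything from the scanning literature): take on the order of $10^{11}$ neurons, $\sim 10^{4}$ synapses per neuron, and firing rates up to $\sim 10^{2}$ Hz. Then

$$10^{11} \times 10^{4} \times 10^{2} = 10^{17} \ \text{synaptic events per second},$$

and at roughly ten operations per synaptic event, that lands near $10^{18}$ operations per second, i.e. an exaflop.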

IBM's Blue Brain, to be specific, is using a certain kind of evolutionary psychology approach to the problem, rather than explicit translation of physical data.

The human brain cannot take all the credit for the development of modern pattern recognition software. Much of it was developed independently of the field of neuroscience, yet it can produce results superior to a human brain's, such as determining that two subjects in low-resolution data samples are likely the same, and it can even make better decisions with temporal information, such as gait. Even though such developments cannot (yet) decide how to use this information to serve the better end of humanity, they nevertheless represent a substantial fraction of innate human capability, so it is reasonable to say that intelligence is a scale rather than an absolute attribute, and undoubtedly we have advanced dramatically on this scale in recent years.
 
  • #45


Negatron said:
See, this is my point about intangible reasoning. You can throw around the term "free will" and enslave or not enslave creatures of any kind as you see fit.

Sound logic doesn't hamper my enslavement of Puerto Ricans, much as it doesn't hamper your intended enslavement of creatures of any other form, to which you apply a convenient label to appease your arbitrary qualifications.

You're fortunate enough to equate all -biological- humans; however, those that do not have an explanation no worse than your own. I for one am convinced I have a soul and midgets do not. I don't care to evaluate what the presence of a soul would imply, however; this is an unnecessary inconvenience to my suppositions.

Perhaps you should define some objective measures by which something qualifies for the right of freedom, measures which can be empirically evaluated, rather than rely on a poor philosophical dichotomy of no quantitative merit.

I like your face; ergo, you have free will. Please move to the right.

You, on the other hand, have too much silicon in your cognitive hardware and therefore have no free will and are not subject to the slightest bit of decency. Please move to the left and jump right into the fire pit.
You seem to be condescending about an issue that is entirely hypothetical for the foreseeable future. While I agree that someday it will very likely be an issue of where we draw the line, it isn't today. We can only guess what the issues may be; it will take an actual example before we can make a concrete decision.

So let's just make sure we keep it academic, shall we?
 
  • #46


Just saw Terminator Salvation. It wasn't as good as I was expecting. Christian Bale isn't that great an actor, I think.
 
  • #47


Negatron said:
To simplify, we can build it, but we will nevertheless not know precisely how it works. However, there are proofs that this is true of any system we observe, not just neural networks, so incomplete understanding should not be a shocker and, as far as the scientific establishment has shown, is not a significant obstacle for progress.
This is interesting. Can you give me some examples of these systems? I have never heard it asserted that we don't understand any systems precisely. (I'm not doubting it, just want a better grasp of what that assertion means.) Would a swinging pendulum constitute a "system", for example?
 
  • #48


I'll probably get flamed pretty hard for this, but I was under the impression that Gödel showed mathematics was one such system. It's a radical interpretation, but there you go.
 
  • #49


zoobyshoe said:
This is interesting. Can you give me some examples of these systems? I have never heard it asserted that we don't understand any systems precisely. (I'm not doubting it, just want a better grasp of what that assertion means.) Would a swinging pendulum constitute a "system", for example?
Gödel has been mentioned. The idea stems from this, but it can be generalized to mere observation and is quite intuitive, so I suspect that certain practical relationships to such understanding have been apparent for the longest time.

http://en.wikipedia.org/wiki/Undecidable_problem

There are certain theories that relate more specifically to simulations of systems, and even simulations of simulations, where an observer cannot exactly replicate a certain system (or, in other words, derive all known truths from it), even though they could get arbitrarily close with exponentially increasing effort. A logarithmic level of completeness, perhaps.

For example, it's not particularly hard to make a computer game, but replicating one exactly by observation is an unattainable challenge. This is even more true of neural networks, as a result of their vastly more complex behavior, but again, the entire point is that none of this stops us from making them to any desired specification. Actually, all these theories only seem to suggest that apparent complexity is deceptive, and that emergent complexity in nature is the result of trivially simple rules, even though their behavior in all possible circumstances cannot be completely described.
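
A toy illustration of that point (my own hypothetical example, in Python): two systems can agree on every observation you happen to make and still not be the same system.

```python
# Two "systems" that agree on every input the observer happens to test.
def system_a(x):
    return x * x

def system_b(x):
    # Identical on the tested range, different far outside it.
    return x * x if x < 10**6 else x * x + 1

observations = [(x, system_a(x)) for x in range(100)]
# Every observation is consistent with both systems, so the observer
# cannot decide which one is actually running:
print(all(system_b(x) == y for x, y in observations))  # True
```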
 
  • #50


Negatron said:
Gödel has been mentioned. The idea stems from this, but it can be generalized to mere observation and is quite intuitive, so I suspect that certain practical relationships to such understanding have been apparent for the longest time.

http://en.wikipedia.org/wiki/Undecidable_problem

There are certain theories that relate more specifically to simulations of systems, and even simulations of simulations, where an observer cannot exactly replicate a certain system (or, in other words, derive all known truths from it), even though they could get arbitrarily close with exponentially increasing effort. A logarithmic level of completeness, perhaps.

For example, it's not particularly hard to make a computer game, but replicating one exactly by observation is an unattainable challenge. This is even more true of neural networks, as a result of their vastly more complex behavior, but again, the entire point is that none of this stops us from making them to any desired specification. Actually, all these theories only seem to suggest that apparent complexity is deceptive, and that emergent complexity in nature is the result of trivially simple rules, even though their behavior in all possible circumstances cannot be completely described.

Thanks, Negatron.

The Wiki article is impenetrable to me, but your last two paragraphs gave me a decent clue about how we could build a thing and still not know precisely how it works.
 
  • #51


Well, to me this just means we can't replicate a system perfectly by just viewing it. If, however, as in an earlier post, we had the original pendulum (which is a system), then someone must have known the 'absolute truth' about it, since they made it and it was the first of its kind, making it the basis for all replicas.

I don't know if I'm articulating my point properly; hopefully someone understands, lol.
 
  • #52


Sorry! said:
Well, to me this just means we can't replicate a system perfectly by just viewing it. If, however, as in an earlier post, we had the original pendulum (which is a system), then someone must have known the 'absolute truth' about it, since they made it and it was the first of its kind, making it the basis for all replicas.

I don't know if I'm articulating my point properly; hopefully someone understands, lol.

One pendulum is simple, but a brain is more like 15 billion pendulums that all affect each other. The likelihood of deriving all truths about how they affect each other from observation is low. I think that's what Negatron meant.
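
Here's a minimal sketch of that intuition, assuming small-angle pendulums with nearest-neighbour coupling (a toy model of my own, not anything from the thread). Simulating the coupled system forward is trivial; recovering the coupling constants from a record of the motion is the hard direction.

```python
import math

def step(thetas, omegas, k=0.1, g_over_l=1.0, dt=0.01):
    """One integration step for N pendulums, each weakly coupled to its
    neighbours. Running this forward is easy; inferring k and the coupling
    layout from the observed motion alone is much harder."""
    n = len(thetas)
    new_omegas = []
    for i in range(n):
        coupling = sum(thetas[j] - thetas[i] for j in (i - 1, i + 1) if 0 <= j < n)
        accel = -g_over_l * math.sin(thetas[i]) + k * coupling
        new_omegas.append(omegas[i] + accel * dt)
    new_thetas = [t + w * dt for t, w in zip(thetas, new_omegas)]
    return new_thetas, new_omegas
```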
 
