Meh, this is why I originally posted this in the philosophy forum. Everyone here will just presume we're speaking of the laptop I'm typing on, or just not go to the depth I was questioning.
Yes, this was the purpose of my posting this; to gain insight into these areas of thought.

AUMathTutor said: I think the real problem is that there's no good definition of intelligence. Until we have that, all of this is just intangible, conjectural, emotional supposition.
We need a definition, a tangible definition, whereby we can point at something, do some sort of test, and measure the level of intelligence.
Perhaps an IQ test? But surely it would be fairly straightforward to design a computer program that would perform well on IQ tests.
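To illustrate the "fairly straightforward" claim, here is a toy sketch (names and scope invented for illustration, not a real IQ-test solver) that handles the number-sequence items common on such tests by checking for arithmetic or geometric progressions:

```python
def next_term(seq):
    """Predict the next term of a sequence, assuming it is an
    arithmetic or geometric progression; return None otherwise."""
    if len(seq) < 2:
        return None
    # Arithmetic progression: constant difference between consecutive terms.
    diffs = [b - a for a, b in zip(seq, seq[1:])]
    if all(d == diffs[0] for d in diffs):
        return seq[-1] + diffs[0]
    # Geometric progression: constant ratio (guard against division by zero).
    if all(x != 0 for x in seq):
        ratios = [b / a for a, b in zip(seq, seq[1:])]
        if all(r == ratios[0] for r in ratios):
            return seq[-1] * ratios[0]
    return None
```

For example, `next_term([2, 4, 6, 8])` gives 10 and `next_term([3, 9, 27])` gives 81 — which, if anything, supports the worry that scoring well on such items says little about intelligence.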
Does anyone have any ideas for what might constitute a list of criteria for intelligence? What about this:
(1.) The entity can and does receive external stimuli and produce verifiable responses to these stimuli in such a way that the responses depend somewhat upon the stimuli.
(2.) The entity has some capacity for memory and recall; that is, responses to stimuli can draw on previous stimuli and previous responses. Such memory and recall must be evident, inasmuch as it must manifest itself.
(3.) The entity can adapt its memory by addition, modification, and deletion, to some satisfactory level; the capacity for addition need not be infinite but should be large, the capacity for modification need not be complete but should be broad, and the capacity for deletion need not be perfect but should be practical. The updating of memory should be observable per se or via responses to stimuli.
(4.) The entity's memory should include proportionally more information from stimuli than was presented to the entity ab initio.
Ideas? Thoughts?
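As a concrete (and deliberately trivial) test of the criteria, consider the following toy agent — a sketch with invented names, not a serious proposal. It responds to stimuli (1), recalls earlier stimuli (2), and can add, modify, and delete memories in an observable way (3), starting nearly empty so that almost all of its memory comes from stimuli (4):

```python
class ToyAgent:
    """A minimal entity meant to exercise criteria (1)-(4):
    it responds to stimuli, and its responses depend on a memory
    it can extend, modify, and prune as new stimuli arrive."""

    def __init__(self):
        # (4) Starts nearly empty: memory will come from stimuli, not ab initio.
        self.memory = {}

    def respond(self, stimulus):
        # (1) The response verifiably depends on the stimulus.
        # (2) Recall: earlier learned associations shape the response.
        return self.memory.get(stimulus, "unknown")

    def learn(self, stimulus, response):
        # (3) Addition/modification of memory, observable via respond().
        self.memory[stimulus] = response

    def forget(self, stimulus):
        # (3) Practical (not perfect) deletion.
        self.memory.pop(stimulus, None)
```

Notably, this agent satisfies the letter of (1)-(4) while being plainly unintelligent, which suggests the criteria as stated would need considerable tightening.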
This is the famous Turing test. The problem with it is the onus it places on the tester to ask the right questions. Certainly if I ask the computer and the human to keep quiet, I won't know if the computer is intelligent or just unplugged.

Sorry! said: If the person cannot tell which is the computer and which is the human then the computer must be thinking and be intelligent.
The reasoning here is too general. You can't enslave a human-meat burger either, and it can be evaluated that the burger has no conscious persona, even though its biological composition is comparable to that of a full-blown person.

zoobyshoe said: In the matter of computers my reasoning is that they are not conscious so the issue of enslavement is absurd. You can no more "enslave" a computer than you could "set it free".
O.K. Thanks.

Negatron said: You could argue that a computer cannot attain a comparable level of functional sophistication and thus is not subject to an equivalent form of morality, however there would be no argument if this were known to be true. The entire question is, IF a computer could do this, would that computer be entitled to an equivalent level of personhood? The answer in such circumstances is, without any rational form of opposition, clearly yes.
You seem to be condescending over an issue that is entirely hypothetical for the foreseeable future. While I agree that some day it will very likely be an issue of where we draw the line, it isn't today. We can only guess what the issues may be; it will require an actual example before we can make a concrete decision.

Negatron said: See, this is my point about intangible reasoning. You can throw around the term "free will" and enslave or not enslave creatures of any kind as you see fit.
Sound logic doesn't hamper my enslavement of Puerto Ricans, much as it doesn't hamper your intended enslavement of creatures of any other form, to which you apply a convenient label to appease your arbitrary qualifications.
You're fortunate enough to equate all -biological- humans; however, those who do not have an explanation no worse than your own. I, for one, am convinced I have a soul and midgets do not. I don't care to evaluate what the presence of a soul would imply, however; that is an unnecessary inconvenience to my suppositions.
Perhaps you should define some objective measures by which something qualifies for the right of freedom, which can be empirically evaluated, rather than rely on a poor philosophical dichotomy of no quantitative merit.
I like your face, ergo, you have free will, please move to the right.
You on the other hand have too much silicon in your cognitive hardware therefore have no free will and are not subject to the slightest bit of decency. Please move to the left and jump right into the fire pit.
This is interesting. Can you give me some examples of these systems? I have never heard it asserted that we don't understand any systems precisely. (I'm not doubting it, just want a better grasp of what that assertion means.) Would a swinging pendulum constitute a "system", for example?

Negatron said: To simplify, we can build it, but we will nevertheless not know precisely how it works. However there are proofs that this is true of any system we observe, not just neural networks, so incomplete understanding should not be a shocker, and as far as the scientific establishment has shown, is not a significant obstacle for progress.
Gödel has been mentioned. The idea stems from this but can be generalized to mere observation and is quite intuitive, so I suspect that certain practical relationships to such understanding have been apparent for the longest time.

zoobyshoe said: This is interesting. Can you give me some examples of these systems? I have never heard it asserted that we don't understand any systems precisely. (I'm not doubting it, just want a better grasp of what that assertion means.) Would a swinging pendulum constitute a "system", for example?
Negatron said:Gödel has been mentioned. The idea stems from this but can be generalized to mere observation and is quite intuitive, so I suspect that certain practical relationships to such understanding have been apparent for the longest time.
http://en.wikipedia.org/wiki/Undecidable_problem
There are certain theories that relate more specifically to simulations of systems, and even simulations of simulations, where an observer cannot exactly replicate a certain system (or, in other words, derive all known truths from it), even though they could get arbitrarily close with exponentially increasing effort. A logarithmic level of completeness, perhaps.
For example, it's not particularly hard to make a computer game, but to replicate one exactly by observation, you will find it an unattainable challenge. This is more true of neural networks as a result of their vastly more complex behavior, but again, the entire point is that none of this stops us from making them to any desired specification. Actually, all these theories only seem to suggest that apparent complexity is deceptive, and that emergent complexity in nature is a result of trivially simple rules, even though their behavior in all possible circumstances cannot be completely described.
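The "replicate by observation" point can be made concrete: two different programs can agree on every input you happen to observe and still differ elsewhere, so any finite set of observations never pins the system down uniquely. A toy illustration (all names invented):

```python
def system_a(x):
    """The "true" system: doubles its input."""
    return 2 * x

def system_b(x):
    """A rival model built only from observations of small inputs:
    it agrees with system_a everywhere we happened to look."""
    return 2 * x if x < 100 else 0

# Every observation made so far is consistent with BOTH systems...
assert all(system_a(x) == system_b(x) for x in range(100))
# ...yet they are not the same system.
assert system_a(100) != system_b(100)
```

Nothing about the observed data can tell the two apart; only probing an unobserved input (here, 100) reveals the difference.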
Sorry! said: Well, to me this just means we can't replicate a system perfectly by just viewing it. If, however, as in an earlier post, we had the original pendulum (which is a system), then someone must have known the 'absolute truth' about it, since they made it and it was the first of its kind, so it is the basis for all replicas.
I don't know if I'm articulating my point properly; hopefully someone understands, lol.