SSequence
stevendaryl said:
I greatly admire Penrose as a brilliant thinker, but I actually don't think that his thoughts about consciousness, or human versus machine, are particularly novel or insightful. My feeling is that human thinking is actually not even Turing-complete. Because our memories are fuzzy and error-prone, I think that there are only finitely many different situations that we can hold in our heads. It's an enormous number, but I think it's still finite. Any kind of "insight" about finitely many situations is computable. Every function on a finite domain is computable; to go beyond what's computable, the domain must be infinite, and I just don't think that humans can really figure out problems with an arbitrarily large number of parameters.

Well, for what it's worth, errors themselves can be handled with various schemes. Before I describe those, first note that one could say that if you allow even just one error for each input value, then you could solve the halting function (there is a small sketch of this after the list below). That's true, but I think you would run into problems very fast for harder problems.
Here are the schemes:
(1) Have an upper bound on the number of errors (for any given input value) given by a recursive function.
(2) Ask the person who is supposedly calculating the function to write down the maximum number of errors (in unambiguous decimal form) in advance. That must be done before the person has written any output value at all for the given input.
Now regarding (1): if you were very skeptical, you could argue that one could make an "error" while calculating the "error function" :P.
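To make the one-error remark above concrete, here is a minimal sketch in Python. The machine model is a toy of my own choosing: a "program" is a zero-argument generator that yields once per simulated step, and `halts_within` stands in for running a universal machine for a bounded number of steps; the function names are mine, not from any standard library. The point is that the emitted answer changes at most once per input, yet its limit is always the true halting answer.

```python
# A "trial and error" decider for the halting problem, sketching why allowing
# even one error (one retraction) per input is already enough to compute the
# halting function in the limit.

from itertools import islice

def halts_within(program, steps):
    """Bounded simulation: advance the program at most `steps` steps.
    Returns True if it stops inside the budget, False otherwise."""
    gen = program()
    for _ in range(steps):
        try:
            next(gen)
        except StopIteration:
            return True
    return False

def limiting_halting_guesses(program):
    """Emit an infinite stream of guesses about whether `program` halts.
    The initial guess is False ("does not halt"); it is retracted to True
    at most once, namely when halting is actually observed. So the written
    answer changes at most once, and its limit is always correct."""
    yield False
    budget = 1
    while True:
        if halts_within(program, budget):
            while True:
                yield True      # retracted once; never changes again
        yield False             # provisional answer, may be retracted later
        budget *= 2

# Two toy inputs: one program that halts after 5 steps, one that loops forever.
def halting_prog():
    for _ in range(5):
        yield

def looping_prog():
    while True:
        yield

print(list(islice(limiting_halting_guesses(halting_prog), 8)))  # settles on True
print(list(islice(limiting_halting_guesses(looping_prog), 8)))  # stays False
```

This does not contradict undecidability, because the procedure never announces when its current answer is final; that is precisely the loophole that schemes (1) and (2) try to pin down.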
=====
My feeling (coming from an intuitionistic mindset, while making no claim to specific expertise) is that the issue is one of "cognitive reliability" or "cognitive acceptability". Note that much of this is from a mathematical point of view (and perhaps somewhat philosophical); I don't comment on any other issues.
Here is a simple thought experiment (no limitations of lifetime, memory, workspace, time, etc. are assumed). You (person B) give someone (person A) a pen and paper (the paper being as big as they want it to be), and ask them to take as much time as they want (with no restrictions whatsoever) and write down a number in simple standard decimal form. That is, ANY number, taking as much time/workspace as they wish. When person A is done writing, they hand the paper to you and you read off the number. Let's write this value as A(0).
This process is then repeated over and over, and we keep getting values A(1), A(2), and so on.
Under standard currently held assumptions, person A can't guarantee that the values written this way will dominate a certain function (call it the "busy one", i.e. the busy beaver function). And that's my point. Person A has no "real" or "acceptable" way to see when he has crossed a certain threshold***. The issue is hardly that he is pressed for time or workspace or any such thing (in any way whatsoever). He simply can't tell what exactly he has to do. Well, he can say what has to be done, but he can't break it down further into cognitively understandable/visualisable bits.
Currently, for example, mathematicians know only the first few values of the "busy one" exactly, and even given unlimited time they could only guarantee (mechanically) the calculation of a handful more.
But what about 100 or 1000? What about 10000?
Basically, we are asking any mathematician(s) to stand ("on trial") in place of person A and beat the "busy one".
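To make the asymmetry concrete, here is a small Python sketch (my own illustration; the machine encoding, the HALT marker, and the STEP_CUTOFF constant are choices I made for this example). It brute-forces the "busy one" in its shift-function form for 2-state, 2-symbol Turing machines. For any fixed number of states the value is an ordinary finite number, but a program like this can only ever certify lower bounds: nothing in it tells us mechanically whether the step cutoff was large enough.

```python
# Brute-force lower bound for the "busy one" in its shift-function form S(n):
# the maximum number of steps a halting n-state, 2-symbol Turing machine can
# make from a blank tape. Shown for n = 2, where exhaustive search is cheap.

from itertools import product

HALT = -1           # sentinel "state" marking a halting transition
STEP_CUTOFF = 100   # arbitrary budget; anything still running is treated as
                    # non-halting, which is why this only certifies a lower bound

def run(machine, cutoff):
    """Run `machine` from the all-zero tape, state 0, head at cell 0.
    `machine` maps (state, symbol) -> (write, move, next_state).
    Returns the number of steps if it halts within `cutoff`, else None."""
    tape = {}
    state, head = 0, 0
    for step in range(1, cutoff + 1):
        write, move, state = machine[(state, tape.get(head, 0))]
        tape[head] = write
        head += move
        if state == HALT:
            return step
    return None

def best_two_state():
    # Every table entry is (write, move, next_state): 2 * 2 * 3 = 12 choices,
    # and a 2-state machine has 4 entries, so 12**4 = 20736 machines in all.
    entries = list(product((0, 1), (-1, 1), (0, 1, HALT)))
    keys = [(s, sym) for s in (0, 1) for sym in (0, 1)]
    best = 0
    for choice in product(entries, repeat=4):
        steps = run(dict(zip(keys, choice)), STEP_CUTOFF)
        if steps is not None:
            best = max(best, steps)
    return best

print(best_two_state())  # expected: 6, the known value of S(2) in this convention
```

For even slightly larger machines the same approach stalls, because no computable function of the number of states bounds the cutoff that would be needed.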
But someone could object and say that person A should be allowed to retract his answer (assume that the number of possible retractions is handled by scheme (2) mentioned above). But then, once again, "another busy one" will be waiting in close proximity.
I really doubt that this affects the real issue**** in the thought experiment in any significant way (it merely displaces it a little further).
*** At least not in a mathematically meaningful way. Perhaps psychologically person A could claim that, but I don't think that's within the domain of maths anymore, so I will leave it there.
**** Certainly errors of judgement or mistakes are an issue. But remember that in the experiment you are also given unbounded time to clear them up. Still, "realistically", one would probably keep some. I am just saying that they wouldn't affect the underlying mathematical problem.
Edit: Removed a comment (possibly incorrect) that I put in haste :P.
Also I didn't describe my own views, as I thought it would get a bit off-topic.