moving finger said:
Can you explain how it is (what is the mechanism whereby) the agent can acquire this “certain knowledge” that these options will in fact be available (as opposed to it simply BELIEVING that they will be available)?
Paul Martin said:
No. And after thinking more carefully, I should amend my example by saying, "The certain knowledge I have that[, barring any malfunction of the PNS of Paul Martin (PNSPM),] I can continue typing this response...".
Here you agree that a malfunction of the PNSPM could render the option unavailable to the agent, hence to ensure that the agent’s knowledge is infallible you need to add the constraint that there will be no malfunction of the PNSPM. Correct?
But (by the same reasoning that the agent cannot have infallible foreknowledge that an option will be available), the agent cannot have infallible foreknowledge that there will be no malfunction of the PNSPM.
In other words, the agent cannot be sure that the option will be available, because the agent cannot be sure that the PNSPM will not malfunction.
With your amended example you have simply replaced “uncertainty that the option will be available” with “uncertainty that the PNSPM will not malfunction”. The former is conditional upon the latter. The uncertainty (the fallibility) is still there.
Conclusion : The agent cannot have infallible foreknowledge.
Paul Martin said:
As for the mechanism, it is probably similar to the mechanism used to acquire the certain knowledge in the agent, when, working through PNSPM, the agent knows what green looks like as reported to the agent via the sensory and perceptive mechanisms of PNSPM.
With respect, I suggest you are trying to compare different types of knowledge.
Knowledge of “what green looks like” is not foreknowledge, it is acquired knowledge. With all due respect to Nagel, IMHO an agent cannot “know” what green looks like unless and until it has experienced seeing the colour green. Once it has had this experience, then it also has acquired the knowledge of “what green looks like”.
It should be self-evident that the agent cannot use this particular experiential mechanism to acquire such “knowledge” about future options (ie about the possibility that a particular “option” exists that has not yet “happened”).
The question as to how your agent might acquire such foreknowledge thus remains unanswered.
BTW – to try and avoid introducing additional confusion I humbly suggest it may be better to focus our debate on discussing the nature of the “free will” of a 3rd-party “agent”, rather than discussing the free will of either PM or MF. Would you agree?
Paul Martin said:
Would you say that the agent does not have certain knowledge of what green looks like as reported by a PNS?
As per above, these (the “acquired knowledge of what green looks like” and “the foreknowledge that a future option is available to it”) are different kinds of knowledge that the agent possesses, and they should not be confused with each other.
Your definition of free will is dependent on infallible foreknowledge, it is not dependent on infallible acquired knowledge.
Paul Martin said:
In an extreme (admittedly improbable, but nevertheless possible) example, the agent could be destroyed in the next instant by an asteroid which hits its home town.
Paul Martin said:
Not in my cosmos, it couldn't. In my cosmos the agent does not live in the home town. The asteroid could wipe out the PNS -- and I have just corrected for that eventuality -- but in my view, not the agent.
I do not understand your suggestion “The asteroid could wipe out the PNS -- but in my view, not the agent.”
Are you suggesting that the agent is immortal, indestructible?
That it is impossible for the agent to be destroyed?
Are you suggesting that the agent somehow exists outside of the physical world?
Can you elaborate please?
If you are indeed suggesting that an agent must necessarily be indestructible in order to have free will, then this needs to be made explicit in your necessary conditions.
However, even postulating an indestructible agent does not avoid the problem. In the extreme example that I provided, the insertion of an indestructible agent simply changes the example to :
the agent’s PNS (plus associated material body and all causal contact between the agent and the physical world) could be destroyed in the next instant by an asteroid which hits its home town. This would wipe out its ability both to continue, and to take a break and have lunch, and all other options.
(explanation : even if the agent somehow exists “outside of the physical world”, the agent acts only via the physical world. The options “to continue typing a response” and “to take a break and have lunch” depend on the agent’s interaction with the physical world, and these options would no longer be available to the agent, however indestructible and however situated the agent might be, if the agent’s PNS, body and all other associated links with the physical world were destroyed.)
Paul Martin said:
Your statement of the issue above got me wondering, "What does it mean 'to know infallibly'?". Simply to say "the agent knows" implies infallibility by the definition of the word 'know'.
That is why I inserted the word infallibly.
Because there are two interpretations of “to know” – one is the interpretation that you wish to use (which is “the agent knows infallibly”), and the other is the one I offered but which you rejected (which is “the agent believes that it knows”).
These are very different. But when most people say that they “know” something, it can mean either one or the other, depending on the source of their knowledge.
When an agent says “I know that it will rain tomorrow”, it actually means “I believe that I know that it will rain tomorrow” and not “I know infallibly that it will rain tomorrow”.
The same is true (IMHO) of all foreknowledge. Infallible foreknowledge (IMHO) is not possible.
Paul Martin said:
But that's hardly convincing. Your argument would say that it is never appropriate to assert "Y knows X" for any X or Y. But that would make the word 'know' useless.
No, I did not say that no infallible knowledge is possible (though in fact it might be true that infallible knowledge is not possible). We simply need to be clear in our definitions whether we are referring to infallible knowledge or not – it is important.
We are talking here specifically about foreknowledge. And IMHO infallible foreknowledge is not possible.
Paul Martin said:
But suppose the agent knows that it knows X.
Does it infallibly know that it infallibly knows X, or does it believe that it knows that it believes that it knows X? Or maybe it believes that it infallibly knows X, or maybe it infallibly knows that it believes it knows X?
Paul Martin said:
If indeed the agent knows X in the first place, knowing that it knows X in addition wouldn't strengthen the claim that it knows X.
Agreed. If the agent infallibly knows X, then that is the end of the issue.
Paul Martin said:
It would only provide additional knowledge which is outside or above the first circumstance, and which could in principle even inhere in a separate agent. We could have, for example, Agent B knows that Agent A knows X.
And we could have 4 different permutations of this based on belief and infallibility.
Paul Martin said:
This led me in three or four different directions. First is to note that you and I, in this discussion, are in that circumstance. We are questioning whether we can know that Agent A knows X. That is a different question from, "Can Agent A know X". I think it may be possible that Agent A can know X while at the same time it is impossible for Agent B to know that Agent A knows X. If that possibility turns out to be the case, then we may not be able to resolve this issue here.
And solipsism may be true. I may be the only conscious agent in the universe, and the rest of you exist in my imagination. But that leads us nowhere. We can only make sense of what is going on if we make some initial reasonable assumptions (axioms) and proceed from there.
Paul Martin said:
The second direction I am led is to extend the chain by supposing that the agent knows that it knows that it knows X. Does that help any? It seems to because now there is even more knowledge than before. What about extending the chain to a million links?
Extending the chain (IMHO) does not help. Either the agent infallibly knows X, or it does not.
Paul Martin said:
The third direction is to salt this chain with one or more 'believes': Can the agent believe it knows X?
Yes, I see no reason why an agent cannot believe anything it wishes to believe.
Paul Martin said:
Know it believes X? Know it believes it knows? Know it knows it believes? Believe it believes it knows? Etc.
Exactly.
Paul Martin said:
This is not meant to be silliness or sophistry, although it sounds like both. Instead, the point I am trying to make is that the issue you articulated is very complex.
I never thought otherwise!
Paul Martin said:
I am not prepared even to guess at the outcome of a resolution, but at this point I am willing to concede that my requirement for infallible knowledge may be unnecessarily strong. I'm not sure your proposed substitutions are the right ones either, however. Maybe it should be a longer chain of knowing and believing.
I do not see what can be gained from a longer chain. The starting point is either “the agent infallibly knows that” or “the agent believes that it knows that”, and all else (IMHO) flows from there.
Paul Martin said:
And if the person is also operating deterministically?
Paul Martin said:
The automaton was an analogy. Little is to be gained by staking much on the details of one of the analogs. But the analogy aside, you are asking about the consequences of the case where the conscious agent operates deterministically. I'd say in that case there is no free will.
Can you explain why you think your definition of free will is necessarily incompatible with determinism?
(for the record, I believe your type of free will does not exist because your definition requires infallible foreknowledge, which I do not believe is possible, in either a deterministic or an indeterministic world)
It is possible to define free will such that it is compatible with determinism.
Paul Martin said:
The point is that it provides a different hypothesis from which to work. My only suggestion is that we explore the hypothesis of a single consciousness and see where it leads. My suspicions are that it will be more fruitful than the hypothesis of "PNSx contains TEOx", or even "The physical world of PNSx contains TEOx"
I’m not sure what relevance those acronyms have to this thread. Can you explain what you mean?
Paul Martin said:
If however one takes a pragmatic approach and defines free will such that free will is possible (even though it may not provide a very satisfying or intuitively “nice” result in terms of the "feeling" of free will), then explaining how free will operates is also possible (this is my approach)
Paul Martin said:
That may be true. But unless and until you actually produce that explanation for how free will operates, the mystery remains. As of this date, I still maintain that free will is a mystery in every model.
MF definition of free will : the ability of an agent to anticipate alternate possible outcomes dependent on alternate possible courses of action, and to choose which course of action to follow and in so doing to behave in a manner such that the agent’s choice appears, both to itself and to an outside observer, to be reasoned but not consistently predictable.
I am not saying that the world is necessarily deterministic, but I think you will find that the above definition is entirely consistent with determinism, and also consistent with the way that humans (who claim to have free will) actually behave. There is no mystery involved in this definition or in the way that free will operates. I agree the MF definition does not accord with the naïve conception of free will - but that is because the naïve conception of free will is based on unsound reasoning, and leads to a kind of free will which is not possible.
No mystery.
MF
