How can brain activity precede conscious intent?

  • Thread starter: Math Is Hard
  • Tags: Delay
Summary
Research by Benjamin Libet and Bertram Feinstein indicates a half-second delay between brain activity and conscious sensation reporting, suggesting that electrical signals related to motor tasks can occur before conscious intent to act. This raises questions about the nature of free will, as some argue that actions may be initiated unconsciously, with conscious awareness only intervening to veto actions. Critics of Libet's findings point out the complexity of distinguishing between conscious decisions and subconscious processes, questioning the reliability of measuring conscious awareness. The discussion highlights the philosophical implications of these findings, particularly regarding the relationship between consciousness and reality. Overall, the debate centers on whether free will exists if actions can precede conscious intent.
  • #121
moving finger said:
Can you explain how it is (what is the mechanism whereby) the agent can acquire this “certain knowledge” that these options will in fact be available (as opposed to it simply BELIEVING that they will be available)?
Paul Martin said:
No. And after thinking more carefully, I should amend my example by saying, "The certain knowledge I have that[, barring any malfunction of the PNS of Paul Martin (PNSPM),] I can continue typing this response...".
Here you agree that a malfunction of the PNSPM could render the option unavailable to the agent, hence to ensure that the agent’s knowledge is infallible you need to add the constraint that there will be no malfunction of the PNSPM. Correct?
But (by the same reasoning that the agent cannot have infallible foreknowledge that an option will be available), the agent cannot have infallible foreknowledge that there will be no malfunction of the PNSPM.
In other words, the agent cannot be sure that the option will be available, because the agent cannot be sure that the PNSPM will not malfunction.
With your amended example you have simply replaced “uncertainty that the option will be available” with “uncertainty that the PNSPM will not malfunction”. The former is conditional upon the latter. The uncertainty (the fallibility) is still there.
Conclusion : The agent cannot have infallible foreknowledge.

Paul Martin said:
As for the mechanism, it is probably similar to the mechanism used to acquire the certain knowledge in the agent, when, working through PNSPM, the agent knows what green looks like as reported to the agent via the sensory and perceptive mechanisms of PNSPM.
With respect, I suggest you are trying to compare different types of knowledge.
Knowledge of “what green looks like” is not foreknowledge, it is acquired knowledge. With all due respect to Nagel, IMHO an agent cannot “know” what green looks like unless and until it has experienced seeing the colour green. Once it has had this experience, then it also has acquired the knowledge of “what green looks like”.
It should be self-evident that the agent cannot use this particular experiential mechanism to acquire such “knowledge” about future options (ie about the possibility that a particular “option” exists that has not yet “happened”).
The question as to how your agent might acquire such foreknowledge thus remains unanswered.

BTW – to try and avoid introducing additional confusion I humbly suggest it may be better to focus our debate on discussing the nature of the “free will” of a 3rd-party “agent”, rather than discussing the free will of either PM or MF. Would you agree?

Paul Martin said:
Would you say that the agent does not have certain knowledge of what green looks like as reported by a PNS?
As per above, these (the “acquired knowledge of what green looks like” and “the foreknowledge that a future option is available to it”) are different kinds of knowledge that the agent possesses, and they should not be confused with each other.
Your definition of free will is dependent on infallible foreknowledge, it is not dependent on infallible acquired knowledge.

Paul Martin said:
In an extreme (admittedly improbable, but nevertheless possible) example, the agent could be destroyed in the next instant by an asteroid which hits its home town.
Paul Martin said:
Not in my cosmos, it couldn't. In my cosmos the agent does not live in the home town. The asteroid could wipe out the PNS -- and I have just corrected for that eventuality -- but in my view, not the agent.
I do not understand your suggestion “The asteroid could wipe out the PNS -- but in my view, not the agent.”
Are you suggesting that the agent is immortal, indestructible?
That it is impossible for the agent to be destroyed?
Are you suggesting that the agent somehow exists outside of the physical world?
Can you elaborate please?

If you are indeed suggesting that an agent must necessarily be indestructible in order to have free will, then this needs to be explicit in your necessary conditions?

However, even postulating an indestructible agent does not avoid the problem. In the extreme example that I provided, the insertion of an indestructible agent simply changes the example to :

the agent’s PNS (plus associated material body and all causal contact between the agent and the physical world) could be destroyed in the next instant by an asteroid which hits its home town. This would wipe out its ability both to continue, and to take a break and have lunch, and all other options.

(explanation : even if the agent exists somehow “outside of the physical world”, the agent only acts via the physical world – the options “to continue typing a response” and “to take a break and have lunch”, are options dependent on the agent’s interaction with the physical world, and these options would no longer be available to the agent, even if the agent was somehow existing somewhere outside of the physical world and indestructible, if the agent’s PNS, body and all other associated links with the physical world were destroyed.)

Paul Martin said:
Your statement of the issue above got me wondering, "What does it mean 'to know infallibly'?". Simply to say "the agent knows" implies infallibility by the definition of the word 'know'.
That is why I inserted the word infallibly.
Because there are two interpretations of “to know” – one is the interpretation that you wish to use (which is “the agent knows infallibly”), and the other is the one I offered but which you rejected (which is “the agent believes that it knows”).
These are very different. But when most people say that they “know” something then it can mean one or the other, depending on the source of their knowledge.
When an agent says “I know that it will rain tomorrow” then it actually means “I believe that I know that it will rain tomorrow” and not “I know infallibly that it will rain tomorrow”.
The same is true (IMHO) of all foreknowledge. Infallible foreknowledge (IMHO) is not possible.

Paul Martin said:
But that's hardly convincing. Your argument would say that it is never appropriate to assert "Y knows X" for any X or Y. But that would make the word 'know' useless.
No, I did not say that no infallible knowledge is possible (but in fact it might be true that infallible knowledge is not possible). We simply need to be clear in definitions whether we are referring to infallible knowledge or not – it is important.
We are talking here specifically about foreknowledge. And IMHO infallible foreknowledge is not possible.

Paul Martin said:
But suppose the agent knows that it knows X.
Does it infallibly know that it infallibly knows X, or does it believe that it knows that it believes that it knows X? Or maybe it believes that it infallibly knows X, or maybe it infallibly knows that it believes it knows X?

Paul Martin said:
If indeed the agent knows X in the first place, knowing that it knows X in addition wouldn't strengthen the claim that it knows X.
Agreed. If the agent infallibly knows X, then that is the end of the issue.

Paul Martin said:
It would only provide additional knowledge which is outside or above the first circumstance, and which could in principle even inhere in a separate agent. We could have, for example, Agent B knows that Agent A knows X.
And we could have 4 different permutations of this based on belief and infallibility.

Paul Martin said:
This led me in three or four different directions. First is to note that you and I, in this discussion, are in that circumstance. We are questioning whether we can know that Agent A knows X. That is a different question from, "Can Agent A know X". I think it may be possible that Agent A can know X while at the same time it is impossible for Agent B to know that Agent A knows X. If that possibility turns out to be the case, then we may not be able to resolve this issue here.
And solipsism may be true. I may be the only conscious agent in the universe, and the rest of you exist in my imagination. But that leads us nowhere. We can only make sense of what is going on if we make some initial reasonable assumptions (axioms) and proceed from there.

Paul Martin said:
The second direction I am led is to extend the chain by supposing that the agent knows that it knows that it knows X. Does that help any? It seems to because now there is even more knowledge than before. What about extending the chain to a million links?
Extending the chain (IMHO) does not help. Either the agent infallibly knows X, or it does not.

Paul Martin said:
The third direction is to salt this chain with one or more 'believes': Can the agent believe it knows X?
Yes, I see no reason why an agent cannot believe anything it wishes to believe.

Paul Martin said:
Know it believes X? Know it believes it knows? Know it knows it believes? Believe it believes it knows? Etc.
Exactly.

Paul Martin said:
This is not meant to be silliness or sophistry, although it sounds like both. Instead, the point I am trying to make is that the issue you articulated is very complex.
I never thought otherwise!

Paul Martin said:
I am not prepared even to guess at the outcome of a resolution, but at this point I am willing to concede that my requirement for infallible knowledge may be unnecessarily strong. I'm not sure your proposed substitutions are the right ones either, however. Maybe it should be a longer chain of knowing and believing.
I do not see what can be gained from a longer chain. The starting point is either “the agent infallibly knows that” or “the agent believes that it knows that”, and all else (IMHO) flows from there.

Paul Martin said:
And if the person is also operating deterministically?
Paul Martin said:
The automaton was an analogy. Little is to be gained by staking much on the details of one of the analogs. But the analogy aside, you are asking about the consequences of the case where the conscious agent operates deterministically. I'd say in that case there is no free will.
Can you explain why you think your definition of free will is necessarily incompatible with determinism?

(for the record, I believe your type of free will does not exist because your definition requires infallible foreknowledge, which I do not believe is possible, in either a deterministic or an indeterministic world)

It is possible to define free will such that it is compatible with determinism.

Paul Martin said:
The point is that it provides a different hypothesis from which to work. My only suggestion is that we explore the hypothesis of a single consciousness and see where it leads. My suspicions are that it will be more fruitful than the hypothesis of "PNSx contains TEOx", or even "The physical world of PNSx contains TEOx"
I’m not sure what relevance those acronyms have to this thread. Can you explain what you mean?

Paul Martin said:
If however one takes a pragmatic approach and defines free will such that free will is possible (even though it may not provide a very satisfying or intuitively “nice” result in terms of the "feeling" of free will), then explaining how free will operates is also possible (this is my approach)
Paul Martin said:
That may be true. But unless and until you actually produce that explanation for how free will operates, the mystery remains. As of this date, I still maintain that free will is a mystery in every model.

MF definition of free will : the ability of an agent to anticipate alternate possible outcomes dependent on alternate possible courses of action, and to choose which course of action to follow and in so doing to behave in a manner such that the agent’s choice appears, both to itself and to an outside observer, to be reasoned but not consistently predictable.

I am not saying that the world is necessarily deterministic, but I think you will find that the above definition is entirely consistent with determinism, and also consistent with the way that humans (who claim to have free will) actually behave. There is no mystery involved in this definition or in the way that free will operates. I agree the MF definition does not accord with the naïve conception of free will - but that is because the naïve conception of free will is based on unsound reasoning, and leads to a kind of free will which is not possible.

No mystery.

MF
:smile:
 
  • #122
moving finger said:
This is why I asked you to give an example of how your “randomness” is supposed to endow an otherwise deterministic agent with “free will”. You have not given such an example (I suspect because you cannot give one).

It is true that we would not consider an individual to 'own' an action or decision if it had nothing to do with his beliefs and aims at the time he made it -- that is, if we assume that indeterminism erupts in between everything that happened to make him the individual he is, and the act itself.

I call this the Buridan's Ass model, in which the only useful role indeterminism can have is as a 'casting vote' when there are no strong preferences one way or the other. An alternative is the Darwinian model, according to which an indeterministic process plays a role analogous to random mutation, in that it throws up ideas and potential solutions to problems which another, more rational and deterministic process selects between. This role of indeterminism places it where it can do least harm to rationality; it is only called on where creativity and imagination are required, and it does not get translated into action without being subject to a rational veto. This answers the common charge that indeterminism would lead to capricious behaviour in all circumstances, which is equivalent to saying that Darwinian evolution would be 'just random' and unable to explain the orderliness of the natural world. Both objections look only at the random process in isolation.
 
  • #123
Tournesol said:
It's to do with the ability to have done otherwise .
Can you clarify please just what you mean by "the ability to have done otherwise"?

Thank you

MF
:smile:
 
  • #124
moving finger said:
With respect, I suggest you are trying to compare different types of knowledge.
Knowledge of “what green looks like” is not foreknowledge, it is acquired knowledge.
Hmmmmm.

moving finger said:
BTW – to try and avoid introducing additional confusion I humbly suggest it may be better to focus our debate on discussing the nature of the “free will” of a 3rd-party “agent”, rather than discussing the free will of either PM or MF. Would you agree?
Yes, I agree. I think I have done that.

moving finger said:
Your definition of free will is dependent on infallible foreknowledge, it is not dependent on infallible acquired knowledge.
Hmmmmmm. It does appear that way.

moving finger said:
I do not understand your suggestion “The asteroid could wipe out the PNS -- but in my view, not the agent.”
The asteroid is in the physical world; the agent is not. Thus the agent is immune from the asteroid.

moving finger said:
Are you suggesting that the agent is immortal, indestructible?
That it is impossible for the agent to be destroyed?
No. Just not by an asteroid.

moving finger said:
If you are indeed suggesting that an agent must necessarily be indestructible in order to have free will, then this needs to be explicit in your necessary conditions?
No. It's just that as soon as the agent is destroyed, it no longer has free will.

moving finger said:
Are you suggesting that the agent somehow exists outside of the physical world?
Yes, absolutely. That is one of the most significant assumptions in my view of the world. It is probably second only to my assumption of the existence of only a single consciousness, since I think "a single consciousness" implies a non-physical world.

moving finger said:
Can you elaborate please?
Yes. I'd be delighted to do so. Thank you for asking.

I don't think it would be appropriate to go into elaborate detail here so I will give you some references and then address what I think you might be getting at by asking.

If you read my recent posts to other threads in this forum, virtually all of them express some notion or other of my world view. You can also check out my essays at http://www.paulandellen.com/essays/essays.htm and if you only want to read one, start with my "World-view 2004".

Now I suspect that what you are asking about is, Where, for heaven's sake, is that other "place" which is outside the physical world? In my view it is in manifolds in higher dimensional space/time which are separate from our 4D manifold and which have more than 4 dimensions. This, I know, I know, has been a very contentious idea since it was proposed to Einstein by Kaluza, and I know that it is falling out of favor today, but I have yet to hear any argument sufficient to dismiss it IMHO.

For more elaboration, I'll let you prompt me with questions or comments.

moving finger said:
We are talking here specifically about foreknowledge. And IMHO infallible foreknowledge is not possible.
I see your point.

moving finger said:
Can you explain why you think your definition of free will is necessarily incompatible with determinism?
I tried to explain that with my thought experiment of re-running identical circumstances and getting different results. Determinism would say that the results would be identical.

moving finger said:
(for the record, I believe your type of free will does not exist because your definition requires infallible foreknowledge, which I do not believe is possible, in either a deterministic or an indeterministic world)
Your points about foreknowledge are beginning to sink in.

moving finger said:
MF definition of free will : the ability of an agent to anticipate alternate possible outcomes dependent on alternate possible courses of action, and to choose which course of action to follow and in so doing to behave in a manner such that the agent’s choice appears, both to itself and to an outside observer, to be reasoned but not consistently predictable.
Except for a small quibble, I find this definition to make sense and I would accept it.

moving finger said:
I am not saying that the world is necessarily deterministic, but I think you will find that the above definition is entirely consistent with determinism, and also consistent with the way that humans (who claim to have free will) actually behave.
Yes, you are right. I did find it consistent with determinism and with human behavior.

moving finger said:
There is no mystery involved in this definition or in the way that free will operates.
OK, but there still remains the slightly nagging question of whether or not there is free will in the naïve sense. (Could I really have taken a lunch break? I just don't know.)

moving finger said:
I agree the MF definition does not accord with the naïve conception of free will - but that is because the naïve conception of free will is based on unsound reasoning, and leads to a kind of free will which is not possible.
That could very well be the reason.

moving finger said:
No mystery.
Not one worth debating anyway.

Thank you for the insights.

Paul
 
  • #125
Tournesol said:
An alternative is the Darwinian model, according to which
an indeterministic process plays a role analogous to random
mutation , in that it throws up ideas and potential solutions
to problems which another, more rational and deterministic process
selects between. This role of indeterminism places it where
it can do least harm to rationality; it is only called on
where creativity and imagination are required, and it does
not get translated into action without being being subject to a
rational veto.
Calling this a Darwinian model is IMHO (and with respect) a little insulting to Charles Darwin, and lends the mechanism suggested above a little too much scientific credibility. The processes underlying the evolution of species are completely compatible with determinism; the so-called “random mutations” need not in fact be due to any ontically indeterministic process. Out of respect to Mr Darwin I suggest the mechanism suggested above be re-named the Random Alternatives (RA) mechanism.

If I understand this RA mechanism correctly, the source of indeterminism is postulated to be introduced prior to the agent’s point of decision (prior to the agent’s moment of choice), and the agent’s choice is still intended to be a deterministic process? Indeterminism is supposed to “generate” a series of random alternative courses of action (much like a random number generator or RNG in a computer) for the agent to consider and from which to choose.

Thus, if we could “re-play” a particular choice that the agent had already made, keeping everything as it was before but allowing the RNG to generate different alternatives, then we may find that the agent “apparently” chooses differently in each re-play, depending upon the random alternative courses of action that are generated by the RNG. This “apparently” different choice by the agent in each re-play is then supposed to be a reflection of the agent’s “free will”.

In fact, if we re-play a particular choice that the agent has made, keeping everything as it was before but allowing the RNG to generate alternative courses of action apparently randomly, then we necessarily must observe one of two alternative scenarios :

EITHER (A) the RNG happens (probabilistically) to generate the same alternatives on the second “run”, in which case the agent (operating deterministically) will necessarily make the same choice as on the first run. In other words, if we could re-play the agent’s moment of choice with all of the conditions exactly as they were before including the alternatives that are generated for the agent to consider, then the agent will necessarily make the same choice as it did before. This is a completely deterministic scenario and is completely compatible with determinism (ie re-play with the same starting conditions and one obtains the same result).

OR (B) the RNG generates different alternatives on the second “run”, in which case the agent (still operating deterministically) might make a choice which is different to the choice that it made on the first run. In other words, if we could re-play the agent’s moment of choice with all of the conditions exactly as they were before EXCEPT that the alternatives for consideration are different, then the agent will not necessarily make the same choice as it did before. This (the agent’s choice) again is a completely deterministic scenario and is again completely compatible with determinism (ie re-play with different starting conditions and one may obtain a different result).

The only difference between re-play (A) and re-play (B) is that in (A) the conditions are indeed set to the way they were the first time round, whereas in (B) the conditions (at the moment of choice of the agent) are not the same as they were before. THIS FACT ALONE (and not any supposed “free will” on the part of the agent) is the source of the agent’s ability to make different choices in each run.

In fact, we do not need the RNG in the proposed mechanism to be ontically indeterministic. It need only be an RNG in the sense of a computer software RNG, which operates to generate epistemically random, but ontically deterministic, numbers. What matters in the RA mechanism (the “apparent source of free will”) is ONLY that the agent is provided with DIFFERENT ALTERNATIVES in each re-play (this will ensure that the agent will not necessarily make the same choice in each re-play, scenario B above), and NOT that these alternatives are generated by a genuinely (ontically) indeterministic process.

To show how "silly" this notion of random generation of "free will" is, consider the following :

The Libertarian Free Will Computer

I could quite easily "build" such models of "free-will" agents using computer software, incorporating an RNG to "generate" apparently random alternatives for my deterministic software agent to consider, and from which to choose. Since I am generating the computer agent's alternatives randomly (thus ensuring that its choice need not be the same each time), does that mean my computer agent now has "free will", where it had no "free will" before (prior to me introducing the RNG)? I think everyone would agree that this notion is very silly. And does it make any difference if the RNG is genuinely random (ontically indeterministic), or whether it simply appears to be random (epistemically indeterminable)? No, of course not. It does not matter what we do with the RNG, we cannot use indeterminism to "endow" the Libertarian version of free will onto an otherwise deterministic machine.
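(A minimal Python sketch of the kind of software model described above; the option names, scoring rule, and seed values are hypothetical and chosen purely for illustration. It shows that a deterministic "agent" handed the same alternatives always chooses the same way, and that only a change in the alternatives it is handed can change its choice.)

```python
import random

def deterministic_choice(alternatives):
    """Deterministic 'agent': always picks the alternative with the highest score.
    Given the same alternatives, it always makes the same choice."""
    return max(alternatives, key=lambda a: a["score"])

def generate_alternatives(rng, n=3):
    """Stand-in for the RNG that 'throws up' candidate courses of action."""
    return [{"name": f"option-{i}", "score": rng.random()} for i in range(n)]

# Scenario A: re-play with the RNG in the same state -> same alternatives -> same choice.
first = deterministic_choice(generate_alternatives(random.Random(42)))
replay = deterministic_choice(generate_alternatives(random.Random(42)))
assert first == replay  # fully deterministic replay

# Scenario B: different alternatives on the re-run -> the choice may differ,
# but only because the inputs differ, not because the agent "willed" otherwise.
other = deterministic_choice(generate_alternatives(random.Random(7)))
print(first["name"], other["name"])
```

Swapping the seeded pseudo-random generator for a genuinely indeterministic source changes nothing in this sketch: all that matters is whether the agent is handed the same or different alternatives on the re-play.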

I think one will find that if one models the above RA mechanism and examines it rationally and logically, looking at the possible sequences generated, then one will find that the introduction of the RNG prior to the moment of choice acts in much the same way as introducing the RNG after the moment of choice. In both cases, there is a point at which a deterministic choice is made by the agent based on alternatives available, but in both cases the final result is in fact random. This is not free will. This is simply a random-choice-making mechanism.

MF
:smile:
 
  • #126
moving finger said:
OR (B) the RNG generates different alternatives on the second “run”, in which case the agent (still operating deterministically) might make a choice which is different to the choice that it made on the first run. In other words, if we could re-play the agent’s moment of choice with all of the conditions exactly as they were before EXCEPT that the alternatives for consideration are different, then the agent will not necessarily make the same choice as it did before. This (the agent’s choice) again is a completely deterministic scenario and is again completely compatible with determinism (ie re-play with different starting conditions and one may obtain a different result).

No, this isn't completely deterministic, because determinism requires a rigid chain of cause and effect going back to the year dot. One part of the process, the selection from options, may be deterministic, but the other part, the generation of options to be selected from, isn't. Determinism doesn't mean that cause produces effect every now and then; it means everything happens with iron necessity and no exceptions.
The only difference between re-play (A) and re-play (B) is that in (A) the conditions are indeed set to the way they were the first time round, whereas in (B) the conditions (at the moment of choice of the agent) are not the same as they were before. THIS FACT ALONE (and not any supposed “free will” on the part of the agent) is the source of the agent’s ability to make different choices in each run.

But they are different because of indeterminism in the chain of causes leading up to that moment, and in my naturalistic account of FW, that indeterminism is one of the things that constitutes FW. You seem to be assuming that FW is supernatural or nothing; I am not making that assumption.

In fact, we do not need the RNG in the proposed mechanism to be ontically indeterministic. It need only be an RNG in the sense of a computer software RNG, which operates to generate epistemically random, but ontically deterministic, numbers. What matters in the RA mechanism (the “apparent source of free will”) is ONLY that the agent is provided with DIFFERENT ALTERNATIVES in each re-play (this will ensure that the agent will not necessarily make the same choice in each re-play, scenario B above), and NOT that these alternatives are generated by a genuinely (ontically) indeterministic process.


Pseudo-random numbers (which are really deterministic) may be used in computers, and any indeterminism the brain calls on might be only pseudo-random. But it does not have to be, and if we assume it is not, we can explain realistically why we have the sense of being able to have done otherwise. People sometimes try to explain this sense as an 'illusion', but that does not make it clear why we would have that particular illusion.
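(As a small illustration of the point that pseudo-random numbers are epistemically random but ontically deterministic, here is a minimal Python sketch; the seed and sample sizes are arbitrary.)

```python
import random
import os

# A seeded pseudo-random generator: the same seed reproduces exactly the same
# sequence on every replay, even though the output "looks" random.
a = random.Random(1234)
b = random.Random(1234)
print([a.randint(0, 9) for _ in range(5)])  # five digits...
print([b.randint(0, 9) for _ in range(5)])  # ...reproduced exactly

# By contrast, an entropy source such as os.urandom is not reproducible from a seed.
print(os.urandom(4))
```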


I could quite easily "build" such models of "free-will" agents using computer software, incorporating an RNG to "generate" apparently random alternatives for my deterministic software agent to consider, and from which to choose. Since I am generating the computer agent's alternatives randomly (thus ensuring that its choice need not be the same each time), does that mean my computer agent now has "free will", where it had no "free will" before (prior to me introducing the RNG)? I think everyone would agree that this notion is very silly. And does it make any difference if the RNG is genuinely random (ontically indeterministic), or whether it simply appears to be random (epistemically indeterminable)? No, of course not. It does not matter what we do with the RNG, we cannot use indeterminism to "endow" the Libertarian version of free will onto an otherwise deterministic machine.

Naturalists think it is not impossible to artificially duplicate human mentality, which would have to include human volition, since there is no 'ghost' in the human machine. You are levelling down, saying humans have no FW and computers don't either. I am levelling up, saying humans have FW and appropriate computers could have it as well. It all depends on what you mean by FW. The contentious issue, vis-à-vis determinism, is the ability to have done otherwise, and that is explainable naturalistically in an indeterministic universe.

I think one will find that if one models the above RA mechanism and examines it rationally and logically, looking at the possible sequences generated, then one will find that the introduction of the RNG prior to the moment of choice acts in much the same way as introducing the RNG after the moment of choice. In both cases, there is a point at which a deterministic choice is made by the agent based on alternatives available, but in both cases the final result is in fact random. This is not free will. This is simply a random-choice-making mechanism.

No, it isn't the same. Introducing randomness after choice removes 'ownership'.
The hypothetical AI wouldn't be able to explain why it did as it did.
 
  • #127
moving finger said:
OR (B) the RNG generates different alternatives on the second “run”, in which case the agent (still operating deterministically) might make a choice which is different to the choice that it made on the first run. In other words, if we could re-play the agent’s moment of choice with all of the conditions exactly as they were before EXCEPT that the alternatives for consideration are different, then the agent will not necessarily make the same choice as it did before. This (the agent’s choice) again is a completely deterministic scenario and is again completely compatible with determinism (ie re-play with different starting conditions and one may obtain a different result).
Tournesol said:
No, this isn't completely deterministic, because determinism requires a rigid chain of cause and effect going back to the year dot.
Please read what I wrote.
“re-play with different starting conditions and one may obtain a different result”
This is completely deterministic.

Tournesol said:
One part of the process, the selection from options, may be deterministic, but the other part, the generation of options to be selected from, isn't.
Agreed - this is the point of “indeterminism”. But introducing indeterminism into the process simply introduces indeterminism into the results – how do you think it introduced free will?

moving finger said:
The only difference between re-play (A) and re-play (B) is that in (A) the conditions are indeed set to the way they were the first time round, whereas in (B) the conditions (at the moment of choice of the agent) are not the same as they were before. THIS FACT ALONE (and not any supposed “free will” on the part of the agent) is the source of the agent’s ability to make different choices in each run.
Tournesol said:
But they are different because of indeterminism in the chain of causes leading up to that moment, and in my naturalistic account of FW, that indeterminism is one of the things that constitutes FW. You seem to be assuming that FW is supernatural or nothing.
No, you seem to be assuming that introducing indeterminism also introduces free will.

Tournesol said:
Pseudo-random numbers (which are really deterministic) may be used in computers, and any indeterminism the brain calls on might be only pseudo-random. But it does not have to be, and if we assume it is not, we can explain realistically why we have the sense of being able to have done otherwise. People sometimes try to explain this sense as an 'illusion', but that does not make it clear why we would have that particular illusion.
You have not explained anything. You have assumed that indeterminism is equivalent to free will simply because indeterminism results in an indeterministic outcome.

Tournesol said:
Naturalists think it is not impossible to artificially duplicate human mentality, which would have to include human volition, since there is no 'ghost' in the human machine. You are levelling down, saying humans have no FW and computers don't either.
I am not saying that humans do not have free will, I am saying that free will as defined by you cannot exist, period.

Tournesol said:
I am levelling up, saying humans have FW and appropriate computers could have it as well. It all depends on what you mean by FW.
Do you agree that the computer I have just described has free will? The computer “could have done otherwise” since its choices were dependent on an RNG input – therefore according to your philosophy it must have free will? Yes? Or no? And if no, then why not?

moving finger said:
I think one will find that if one models the above RA mechanism and examines it rationally and logically, looking at the possible sequences generated, then one will find that the introduction of the RNG prior to the moment of choice acts in much the same way as introducing the RNG after the moment of choice. In both cases, there is a point at which a deterministic choice is made by the agent based on alternatives available, but in both cases the final result is in fact random. This is not free will. This is simply a random-choice-making mechanism.
Tournesol said:
No, it isn't the same. Introducing randomness after choice removes 'ownership'. The hypothetical AI wouldn't be able to explain why it did as it did.
Incorrect. Why do you think the AI would not be able to explain why it did as it did? It operates deterministically; there is no reason why it should not understand the reason for its choices.

MF

:smile:
 
  • #128
moving finger said:
With respect, I suggest you are trying to compare different types of knowledge.
Knowledge of “what green looks like” is not foreknowledge, it is acquired knowledge.
Paul Martin said:
Hmmmmm.
Hmmmmm? Is that a yes or a no?

moving finger said:
Your definition of free will is dependent on infallible foreknowledge, it is not dependent on infallible acquired knowledge.
Paul Martin said:
Hmmmmmm. It does appear that way.
Thank you.

moving finger said:
I do not understand your suggestion “The asteroid could wipe out the PNS -- but in my view, not the agent.”
Paul Martin said:
The asteroid is in the physical world; the agent is not. Thus the agent is immune from the asteroid.
Interesting. Can you please define what else your “agent” is also immune to? The common cold?

moving finger said:
Are you suggesting that the agent is immortal, indestructible?
That it is impossible for the agent to be destroyed?
Paul Martin said:
No. Just not by an asteroid.
“Just” by an asteroid? Thus, your agent can be destroyed by absolutely anything else…… but not by an asteroid?
Really?
Strange.

moving finger said:
If you are indeed suggesting that an agent must necessarily be indestructible in order to have free will, then this needs to be explicit in your necessary conditions?
Paul Martin said:
No. It's just that as soon as the agent is destroyed, it no longer has free will.
Well that does seem logical. You are not suggesting that your agent is necessarily indestructible then.

moving finger said:
Are you suggesting that the agent somehow exists outside of the physical world?
Paul Martin said:
Yes, absolutely. That is one of the most significant assumptions in my view of the world. It is probably second only to my assumption of the existence of only a single consciousness, since I think "a single consciousness" implies a non-physical world.
Perhaps you should therefore include this in your “necessary conditions” for free will?

Paul Martin said:
For more elaboration, I'll let you prompt me with questions or comments.
OK, maybe later.

moving finger said:
We are talking here specifically about foreknowledge. And IMHO infallible foreknowledge is not possible.
Paul Martin said:
I see your point.
Thank you. Does that mean you “agree”?

moving finger said:
Can you explain why you think your definition of free will is necessarily incompatible with determinism?
Paul Martin said:
I tried to explain that with my thought experiment of re-running identical circumstances and getting different results. Determinism would say that the results would be identical.
Sorry, I still don’t understand how you introduce “different results”, unless this is purely due to indeterminism? (but if it is indeterminism, then what has this to do with free will?)

moving finger said:
(for the record, I believe your type of free will does not exist because your definition requires infallible foreknowledge, which I do not believe is possible, in either a deterministic or an indeterministic world)
Paul Martin said:
Your points about foreknowledge are beginning to sink in.
Sink in? Does this mean you agree?

moving finger said:
MF definition of free will : the ability of an agent to anticipate alternate possible outcomes dependent on alternate possible courses of action, and to choose which course of action to follow and in so doing to behave in a manner such that the agent’s choice appears, both to itself and to an outside observer, to be reasoned but not consistently predictable.
Paul Martin said:
Except for a small quibble, I find this definition to make sense and I would accept it.
Wonderful!

moving finger said:
I am not saying that the world is necessarily deterministic, but I think you will find that the above definition is entirely consistent with determinism, and also consistent with the way that humans (who claim to have free will) actually behave.
Paul Martin said:
Yes, you are right. I did find it consistent with determinism and with human behavior.
Even more wonderful!

moving finger said:
There is no mystery involved in this definition or in the way that free will operates.
Paul Martin said:
OK, but there still remains the slightly nagging question of whether or not there is free will in the naïve sense. (Could I really have taken a lunch break? I just don't know.)
If you would define “free will in the naïve sense” then I could tell you.

moving finger said:
I agree the MF definition does not accord with the naïve conception of free will - but that is because the naïve conception of free will is based on unsound reasoning, and leads to a kind of free will which is not possible.
Paul Martin said:
That could very well be the reason.
Wonderful!

moving finger said:
No mystery.
Paul Martin said:
Not one worth debating anyway.
Even more wonderful!

Does this mean that you now accept my suggested changes to your necessary conditions? (ie that agents "believe that they have infallible knowledge" of options, rather than agents "have infallible knowledge" of options?)
MF
:smile:
 
  • #129
moving finger said:
Does this mean that you now accept my suggested changes to your necessary conditions? (ie that agents "believe that they have infallible knowledge" of options, rather than agents "have infallible knowledge" of options?)
Yes. I accept your changes. I think you have improved on my original conditions. Thank you.

I am by nature slow but persistent. It took me a while but after thinking about your arguments, I finally saw that you are right. Sorry it took so long, and thank you for your effort.

Paul
 
  • #130
Paul Martin said:
Yes. I accept your changes. I think you have improved on my original conditions. Thank you.

I am by nature slow but persistent. It took me a while but after thinking about your arguments, I finally saw that you are right. Sorry it took so long, and thank you for your effort.

Paul

You are most welcome.

We have arrived at our necessary conditions for free will :
1. The agent must be conscious.
2. The agent must believe that multiple options for action are available.
3. The agent must know (or believe that it knows) at least something about the probabilities of near-term consequences of at least some of the options in case they are acted out.
4. The agent must be able to choose and execute one of the options in the folklore sense of FW.

The above conditions (IMHO) are compatible with a deterministic world; they are also compatible with my definition of free will, as well as being (IMHO) an accurate description of exactly what humans experience when they claim to be acting as free agents.

moving finger said:
MF definition of free will : the ability of an agent to anticipate alternate possible outcomes dependent on alternate possible courses of action, and to choose which course of action to follow and in so doing to behave in a manner such that the agent’s choice appears, both to itself and to an outside observer, to be reasoned but not consistently predictable.

As for the concern you expressed about the existence of naïve free will :
Paul Martin said:
OK, but there still remains the slightly nagging question of whether or not there is free will in the naïve sense. (Could I really have taken a lunch break? I just don't know.)
What I believe you mean here is : “If I had my time over again, could I have done otherwise than what I did?”. This IMHO is the naïve concept of free will, it is the concept usually espoused by Libertarians, and it is the concept we naturally think of based on “gut feeling” and “intuition” without really thinking rigorously about the issue.

My answer : Does it really matter whether you “could” have taken a lunch break or not? The fact is that “you were able to consider the option of taking a lunch break”, and "you believed at the time that this was an option available to you", and “you were able to evaluate the advantages and disadvantages of taking a lunch break”, and at the time of your decision you were NOT coerced into NOT taking a lunch break, and (most importantly) you did what you wanted to do at the time, which was "not take a lunch break".

If you could replay that time over again, with literally everything the same way as it was before, then the same things would happen – you would consider the option, you would believe the option is available, you would evaluate advantages and disadvantages, you would not be coerced, and you would once again DO WHAT YOU WANT TO DO, which is "not take a lunch break".

What I believe most Libertarians ACTUALLY MEAN when they ask “if I could replay the same situation exactly as before, could I have done otherwise than what I actually did?” is in fact that they want the "freedom" to NOT replay it exactly as it was before, they want to be able to "choose differently" which means in turn they want to be able to "want to choose differently", which is NOT REPLAYING EXACTLY AS IT WAS BEFORE. The Libertarian who thinks he can replay and choose differently is therefore (IMHO) deceiving himself into thinking that he is actually replaying the same situation, when in fact he is not.

To the naïve question of free will expressed as “if I could replay the same situation EXACTLY as before, could I have done otherwise than what I actually did?” the answer is (IMHO) NO, YOU COULD NOT HAVE DONE OTHERWISE, BUT IT DOESN’T MATTER!

MF
:smile:
 
  • #131
moving finger said:
I think one will find that if one models the above RA mechanism and examines it rationally and logically, looking at the possible sequences generated, then one will find that the introduction of the RNG prior to the moment of choice acts in much the same way as introducing the RNG after the moment of choice. In both cases, there is a point at which a deterministic choice is made by the agent based on alternatives available, but in both cases the final result is in fact random. This is not free will. This is simply a random-choice-making mechanism.

Tournesol said:
it isn't the same. Introducing randomness after choice removes 'ownership'. The hypothetical AI wouldn't be able to explain why it did as it did.
Tournesol,

I just realized that I misunderstood your comment here. Apologies. Let me reply correctly this time :

I agree that in the case of the RNG after the moment of choice, the agent would not be able to explain why it chose what it did choose.

But on the other hand, in the case of the RNG before the moment of choice, the agent would not be able to explain why it considered the alternatives that it did consider – it would in fact have no control over the alternatives being considered because those alternatives are being generated, not by any rational process within the agent, but randomly.

In both cases, the outcome is random.

In both cases, the agent does not completely control what it does.
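(A minimal sketch of the contrast being drawn; the option names and scores below are hypothetical, chosen only for illustration. With the RNG before the choice, the agent can cite a reason for its selection but not for the menu it was given; with the RNG after the choice, even its selection is overridden.)

```python
import random

def reasoned_choice(options):
    """Deterministic selection rule: pick the option with the highest score.
    The agent can always cite this rule as the reason for its selection."""
    return max(options, key=lambda o: o[1])

rng = random.Random()

# RNG *before* the moment of choice: the menu of alternatives is generated
# randomly, but the selection from that menu is deterministic and explainable.
menu = [("continue typing", rng.random()), ("take a lunch break", rng.random())]
chosen_before = reasoned_choice(menu)[0]

# RNG *after* the moment of choice: the agent selects deterministically,
# then a random override decides what is actually done.
preferred = reasoned_choice([("continue typing", 0.9), ("take a lunch break", 0.2)])[0]
chosen_after = rng.choice(["continue typing", "take a lunch break"])

# Over many replays both outcomes vary randomly; in neither arrangement
# does the agent fully control (or fully account for) what it ends up doing.
print(chosen_before, preferred, chosen_after)
```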

MF
:smile:
 
  • #132
Tournesol said:
It's to do with the ability to have done otherwise .
Seems like a harmless expression doesn't it? Surely it stands to reason that all free will agents "have the ability to have done otherwise"?

I wish to show that this naive Libertarian concept of free will is an impossibility.

Libertarians seem to believe that "free will" is somehow associated with the fact that "if one could replay the circumstances exactly the same as before, then one must have been able to have done otherwise than what one actually did".

For example, one hour ago I could have chosen to take a lunch break, or I could have chosen to continue typing. In fact, I chose to continue typing. The Libertarian would say that if I could replay the circumstances exactly the same as before, then (if I have free will) I must have been able to choose to take a lunch break rather than to continue typing.

At first sight, this idea seems intuitively "right"; our naive impression of free will is surely that we can choose to do whatsoever we wish, and therefore (our intuition tells us), if we have free will then that also means that, given identical circumstances, we still must have been able to do otherwise than what we actually did?

Let us analyse this seemingly "obvious" statement a little more closely.

Firstly, what do we mean by "circumstances exactly the same as before"? Do we mean simply that the circumstances should be similar, but not necessarily identical? No, of course not, because obviously if the circumstances were even slightly different then that might affect our choice anyway, regardless of whether we "choose freely" or not.
Therefore, when we say "circumstances exactly the same as before" we do mean precisely the same, including our own internal wishes, desires, volitions.

Secondly, what do we mean by "able to have done otherwise"?
Do we mean "physically able", in the sense that one is physically capable of carrying out different actions? No of course not.
Do we mean "able to choose", in the sense that one is capable of selecting one of among various alternatives?
This seems closer to what we actually mean. But surely "our choice" is determined by "us"; we "freely" decide our choice based upon the prevailing circumstances.

Now combine these two. Repeat the scenario, with "circumstances exactly the same as before".

If circumstances are indeed exactly the same as before, then all of our internal wishes, desires, volitions etc will also be exactly the same as before. In which case, why on Earth would we WANT to choose differently than the way we did before? Replay the scenario with exactly the same conditions, and any rational "free thinking" agent will choose exactly the same way each and every time. The only reason why it should ever "choose differently" in the carbon-copy repeat is if there is some element of indeterminism in the choice - but do Libertarians REALLY want to say that their free will choices are governed by indeterminism? I think not.

My answer to this naive Libertarian concept of free will : Does it really matter whether I “could” have taken a lunch break or not?

The fact is that “I was able to consider the option of taking a lunch break”, and in addition "I believed at the time that this was an option available to me", and even “I was able to evaluate the advantages and disadvantages of taking a lunch break”, and furthermore at the time of my decision I was NOT coerced into NOT taking a lunch break, and (most importantly) I did what I wanted to do at the time, which was "not take a lunch break".

If I could replay that time over again, with literally everything the same way as it was before, then the same things would happen – I would consider the options, I would believe the options are available, I would evaluate advantages and disadvantages, I would not be coerced, and I would once again DO WHAT I WANTED TO DO, which is (because the circumstances are identical) "not take a lunch break".

What I believe most Libertarians ACTUALLY MEAN when they say "if one could replay the circumstances exactly the same as before, then one must have been able to have done otherwise than what one actually did" is in fact that they want to have the "freedom" to NOT replay it EXACTLY as it was before, they want in fact to be able not only to "choose differently" to the way they did before, but they also want to "want to choose differently", which is then NOT REPLAYING EXACTLY AS IT WAS BEFORE.

The Libertarian who thinks he can replay perfectly and still choose differently is therefore (IMHO) deceiving himself.

The naïve concept of free will is expressed as “if one could replay the circumstances exactly the same as before, then one must have been able to have done otherwise than what one actually did”

- and the rational response is (IMHO) YOU COULD NOT HAVE DONE OTHERWISE THAN WHAT YOU DID, BUT IT REALLY DOESN’T MATTER!

MF
:smile:
 
  • #133
moving finger said:
I have no idea what you mean by strong free will
(but from the rest of your post I suspect we have some similar beliefs)

May I ask - do you believe your concept of free will is compatible with determinism?
MF
:smile:

Sure. I suppose I can formulate what I mean when I say that an action of mine is free:

Any action X is a freely willed action if, and only if, the impulse to carry it out was internal to my own psyche and I was conscious of this impulse. This basically just means that so long as I ordered the action, then it's freely willed. Even if this I is nothing more than a particular unique confluence of physical and historical forces networking to determine the behavior of my body, that's fine with me. It doesn't even matter if I couldn't have done otherwise. 'Strong' free will is just that and seems to be what everyone else wants - contracausal, non-deterministic, and could have done otherwise.

By the way, Paul Martin asked a while back what I meant by my distinction between 'experiential' and 'functionalist' knowledge. Functionalist isn't the best word to use, as it conjures up images of psychological theories that I'm not endorsing, and these aren't accepted technical terms or anything, so I probably should explain. Maybe a better distinction would be between conscious and non-conscious knowledge, since the point that I was trying to make was simply that I don't agree that knowledge is just the experiential state that one is in when one acquires knowledge. For instance, everyone in this thread likely knows that 2+37=39, even though they may not have been thinking about it at that time. Given the results of hypnotic therapy and such, it's entirely possible that you have knowledge of the past that you are not and may never be conscious of. This knowledge (suppressed memories) would fit the non-conscious knowledge mold but wouldn't fit what I meant by functional knowledge, as functional knowledge has to be usable in some way. A good example of what I meant by functional, non-experiential knowledge is a typist's knowledge of the keyboard. I know exactly where all of the keys are on the board and use that knowledge to type out words on a screen. Rarely am I conscious of where the keys are, however. I'm certainly not thinking about it; I'm just thinking about the words I want to produce. In the same way, a good pitcher never thinks about the mechanics needed to produce a good curveball; he just throws the pitch. Nonetheless, he must have knowledge of those mechanics in order to have the ability to throw a curveball in the first place.
 
  • #134
And loseyourname, you could add that if some of those causes were randomly altered, that would change either the given parameters, or your desires, and you either would want to do differently because of the different causes, or else you would want differently because your desires were different, but in neither case would you be acting freely.
 
  • #135
selfAdjoint said:
And loseyourname, you could add that if some of those causes were randomly altered, that would change either the given parameters, or your desires, and you either would want to do differently because of the different causes, or else you would want differently because your desires were different, but in neither case would you be acting freely.

Well, they'd be free under my conception, but I suppose not in a strong sense. I'm kind of with Stace's language analysis, though, when he demonstrates that the common usage of the term 'free' only denotes that an action was not compelled by an external force as a proximate cause. As someone said (I can't remember who), we may be free to do as we please, but we are not free to please as we please.
 
  • #136
selfAdjoint said:
... in neither case would you be acting freely.
Can you please define what you mean by "acting freely"?

Thanks

MF
:smile:
 
  • #137
loseyourname said:
Sure. I suppose I can formulate what I mean when I say that an action of mine is free:

Any action X is a freely willed action if, and only if, the impulse to carry it out was internal to my own psyche and I was conscious of this impulse. This basically just means that so long as I ordered the action, then it's freely willed. Even if this I is nothing more than a particular unique confluence of physical and historical forces networking to determine the behavior of my body, that's fine with me. It doesn't even matter if I couldn't have done otherwise.
Agreed. And this kind of free will is indeed compatible with determinism.

My preferred definition of free will I think you will find agrees completely with your above description :
"Free will is the ability of an agent to anticipate alternate possible outcomes dependent on alternate possible courses of action and to choose which course of action to follow and in so doing to behave in a manner such that the agent’s choice appears, both to itself and to an outside observer, to be reasoned but not consistently predictable."

loseyourname said:
'Strong' free will is just that and seems to be what everyone else wants - contracausal, non-deterministic, and could have done otherwise.
This kind of “Strong” free will, IMHO, is a “will-o-the-wisp” and cannot exist. This seems to be the kind of “free will” that Libertarians want, but I have yet to find anyone who can both unambiguously define it and rationally defend it.

MF
:smile:
 
  • #138
loseyourname said:
As someone said (I can't remember who), we may be free to do as we please, but we are not free to please as we please.
Peter O'Toole, as the character T E Lawrence in the epic Lawrence of Arabia, says (in a memorable scene with Omar Sharif, where he finally comes to terms with his limited ability to change the course of history in the Arabian peninsula) :

"We are free to do what we want. But we are not free to want what we want."

MF
:smile:
 
  • #139
lyn said:
Sure. I suppose I can formulate what I mean when I say that an action of mine is free:

Any action X is a freely willed action if, and only if, the impulse to carry it out was internal to my own psyche and I was conscious of this impulse.

But you can be consciously aware of impulses that are not your conscious wish. A kleptomaniac is consciously aware of an impulse to steal, originating within herself, but it is not her wish or will to steal.

This basically just means that so long as I ordered the action, then it's freely willed.

what does "I ordered" mean ?


MF said:
My preferred definition of free will I think you will find agrees completely with your above description :
"Free will is the ability of an agent to anticipate alternate possible outcomes dependent on alternate possible courses of action and to choose which course of action to follow and in so doing to behave in a manner such that the agent’s choice appears, both to itself and to an outside observer, to be reasoned but not consistently predictable."

That is compatible with indeterminism as well as determinism. Depending on what you mean by "choose" it might even require indeterminism.

'Strong' free will is just that and seems to be what everyone else wants - contracausal, non-deterministic, and could have done otherwise.

If the universe is indeterministic, there is nothing miraculous about the ability
to have done otherwise.
 
  • #140
Tournesol said:
But you can be consciously aware of impulses that are not your conscious wish. A kleptomaniac is consciously aware of an impulse to steal, originating within herself, but it is not her wish or will to steal.

That's a compulsion, not an impulse. Subtle difference.

what does "I ordered" mean ?

I made a decision to take any given particular action.

That is compatible with indeterminism as well as determinism. Depending on what you mean by "choose" it might even require indeterminism.

I don't think you were responding to me here, but I certainly don't view choices as indeterministic. They certainly can be, but don't have to be (go back to the Mars Rover example).

If the universe is indeterministic, there is nothing miraculous about the ability to have done otherwise.

There's nothing willed about it, either.
 
  • #141
loseyourname said:
That's a compulsion, not an impulse. Subtle difference.

Yes, but the salient difference isn't made explicit by your definition. Conscious awareness is not enough -- conscious approval is also required.

I made a decision to take any given particular action.

the problem of defining FW is to 'unpack' it, not just substitute synonyms.


There's nothing willed about it, either.

How do you know? Can you demonstrate that FW is not just a particular complex combination of deterministic and undetermined events and processes?
Do you insist, along with MF, that it must involve a ghost?
 
  • #142
I am determined to inform those who give a difference that reality does not.
As for sub/conscious decisions being determined, the subconscious is programmed beforehand both by experience and by conscious deliberation and reasoning.
 
  • #143
Tournesol said:
Can you demonstrate that FW is not just a particular complex combination of deterministic and undetermined events and processes?
Do you insist, along with MF, that it must involve a ghost?
The determinist can demonstrate that the "feeling of free will" which many humans claim to possess is easily explainable on the basis of pure determinism, with no need to invoke any metaphysical concepts of libertarian free will. This then becomes a scientific "hypothesis", based on accepted understanding of how the world works, which explains the human feeling of free will. It is accepted scientific practice that scientific hypotheses are never proven true; they are only ever proven false.

See, for example : http://www.wjh.harvard.edu/%7Ewegner/pdfs/Wegner&Wheatley1999.pdf

The defender of libertarian free will is also free (no pun intended) to propose an hypothesis of how libertarian free will is supposed to work (ie how the human feeling of free will is explained based on the real existence of some kind of libertarian free will) - but as in the case of the determinist hypothesis, such a libertarian explanation should be based on accepted understanding of how the world works, and be free of metaphysical "sleight of hand" (otherwise it risks being branded as incoherent). Can the libertarian do this?

Best Regards

MF

If one pays attention to the concepts being employed, rather than the words being used, the resolution of this problem is simple (Stuart Burns)
 