How can brain activity precede conscious intent?

  • Thread starter: Math Is Hard
  • Tags: Delay
Summary
Research by Benjamin Libet and Bertram Feinstein indicates a half-second delay between brain activity and conscious sensation reporting, suggesting that electrical signals related to motor tasks can occur before conscious intent to act. This raises questions about the nature of free will, as some argue that actions may be initiated unconsciously, with conscious awareness only intervening to veto actions. Critics of Libet's findings point out the complexity of distinguishing between conscious decisions and subconscious processes, questioning the reliability of measuring conscious awareness. The discussion highlights the philosophical implications of these findings, particularly regarding the relationship between consciousness and reality. Overall, the debate centers on whether free will exists if actions can precede conscious intent.
  • #91
moving finger said:
Randomness ensures that an outcome is indeterministic. What does this have to do with "free will"?

It's to do with the ability to have done otherwise.

Again.
 
  • #92
Tournesol said:
It's to do with the ability to have done otherwise.
But INDETERMINISM DOES NOT ENDOW FREE WILL.

I note that you choose to attempt answers only to the questions that you can answer.

I also asked:
moving finger said:
How does the introduction of an indeterministic outcome suddenly endow "free will" to an agent that was previously bereft of "free will"?
and:

moving finger said:
Can you give an example?
Both of which you ignored.

With respect, Tournesol, it seems obvious to me from your reluctance to provide explanations that you do not understand the problem.

This is why I asked you to give an example of how your “randomness” is supposed to endow an otherwise deterministic agent with “free will”. You have not given such an example (I suspect because you cannot give one).
MF
:smile:
 
Last edited:
  • #93
Tournesol said:
It's to do with the ability to have done otherwise.
The Libertarian hypothesises that indeterminism is supposed to somehow mysteriously "allow the agent to have done otherwise" - in other words that the action of indeterminism at some stage in the agent's decision-making process somehow (but mysteriously) endows "free will" upon that agent.

Conversely, I suggest that the association of "free will" with indeterminism is erroneous, and the MOST that can ever be accomplished by the introduction of indeterminism anywhere into the agent's decision-making process is ...indeterminism!

Let us try to examine how the Libertarian hypothesis could possibly work.

Let us assume that at a particular point in time an agent is able to follow one of many different possible courses of action, and hence needs to make a very generic decision about "which course of action to follow" from the alternative possibilities available. The Libertarian would say that the agent is able to make a "free will" decision if and only if we can somehow correctly introduce indeterminism into the agent's decision-making process.

Now, if we introduce the indeterminism into the process BEFORE the agent makes a decision (antecedent indeterminism), then this could possibly be translated to "throwing up a different alternative course of action" for the agent to consider in its decision-making process. But there are in fact no "alternative courses of action" that indeterminism can "throw up" which would not also be accessible to the agent via a purely deterministic process. In other words, a purely deterministic agent would have just as many possible different alternative courses of action available to it as would the agent operating with antecedent indeterminism.

The introduction of indeterminism BEFORE the moment of the agent's decision does not therefore necessarily lead to a different range of possible alternative courses of action; it simply "introduces indeterminism" into the proceedings prior to the decision-making process and cannot in fact make any difference to the agent's "free will" to choose between the different alternative courses of action.

Now the Libertarian may say therefore that the indeterminism needs to be introduced subsequent to (rather than prior to) the agent's decision-making process. But I hope it is transparently obvious (without me having to explain the details) that any indeterminism in the process subsequent to the agent's decision simply makes the outcome indeterministic, and cannot possibly have any bearing on any free will of the agent during decision making!
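The two cases can be mimicked in a toy model (a minimal sketch; the trivially simple deterministic rule and all names here are hypothetical stand-ins for the agent's decision-making):

```python
import random

def decide(options):
    """A stand-in for the agent's deterministic decision rule."""
    return max(options)  # always picks the lexicographic maximum

options = ["walk", "run", "sit"]
rng = random.Random()

# Antecedent indeterminism: randomize how the options are thrown up.
shuffled = rng.sample(options, k=len(options))
assert decide(shuffled) == decide(options)  # same option set, same decision

# Subsequent indeterminism: randomize after the decision is made.
choice = decide(options)
outcome = choice if rng.random() < 0.9 else rng.choice(options)
# The OUTCOME is now indeterministic, but the decision itself was untouched.
```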

Conclusion: There appears to be no way that introducing indeterminism into the agent's decision-making process can actually endow the agent with free will; therefore, if an agent does not already possess free will in the absence of indeterminism (as the Libertarian suggests), then no free will is possible. The Libertarian concept of free will is thus inconsistent.

MF
:smile:
 
  • #94
loseyourname said:
What's a "conscious agent" if you define it as necessarily separate from the brain and/or robot?
I'm not sure how to parse your question. If you are asking what I mean by "conscious agent", I mean any agent capable of experiencing consciousness as I experience it.

If you are asking whether I require that the conscious agent necessarily be separate from the brain and/or robot, the answer is "no".

loseyourname said:
If you take that phrase out of your formulation, the Mars rover meets your standards (unless you're using an experiential, rather than functionalist definition of the verb 'to know').
I'm not sure I know exactly what you mean by the terms 'experiential' and 'functionalist', but when I said "The conscious agent must know..." I meant that it must have the same sort of experience I have when I realize that I know something. I do not consider that a thermometer "knows" the temperature in that same way, nor does the computer "know" my account number in that same way.

If you take the phrase "conscious agent" out of my formulation, you will have obliterated my standards altogether. In my view, a conscious agent is absolutely necessary for the concept of free will to have the meaning I would ascribe to it. No matter how sophisticated an algorithm you might incorporate into a machine, so that it can pass the Turing test, convince Dennett that it is as conscious as he, act and respond like an intelligent human, or better, I would still maintain that it would not have free will unless it actually experienced consciousness the way I do. It would have to meet my three necessary conditions and know what it was doing in order to have free will.

In my view, the Mars rover has free will as long as the JPL scientist is attending to its operation; the free will is just not seated in the rover vehicle. And, in my view, humans have free will as long as they are awake; I just don't think the free will (or the consciousness in general) is seated in the brain.
 
  • #95
Math Is Hard said:
Thank you for your thoughts. I'm sorry I have been taking a long time to think through this. I am slow.
You're welcome. No need to be sorry; I assure you that you are no slower than I am.

Math Is Hard said:
What I still can't get is that this "conscious agent" that you mentioned seems to be an un/pre/sub conscious (still searching for the right word) agent since it is acting before any processing that occurs in the physical brain.
Before I start, I should point out that my views are very different from those of most other people. So be careful if you try to reconcile what I say with other things you read.

I would suggest that you call off your search for the right word. I think we are heading for trouble whenever we think that words are magic and that if we only pick the right one, everything will become clear.

I think the notion of free will is that you can do something you want to do, if and when you decide to do it.

Now that statement is loaded with words we need to pick apart too so we don't get into trouble. First, I used the term 'you' to identify the actor in this scenario. We are making an assumption we should acknowledge if we consider that the actor, "you", is the same in all three actions. In my view, that is a bad assumption. I think that "you" are composed of two separable entities: Your consciousness, and your physical body/brain. If you don't acknowledge that separation, then my analogy won't make sense.

There are three different kinds of "action" going on here: "wanting", "deciding", and "doing". Since your concern has to do with timing, let's consider the sequence of events. I think you would agree that wanting, deciding, and doing should occur in that order, even though some actions like impulse buying might interchange some of them.

But, as I listed in my necessary conditions for free will, in order to really be a free will action, at least the "deciding" and "doing" must be accompanied by conscious knowing. (The "wanting" may be below the conscious radar in some "un/pre/sub consciousness".) So the question is, where does the "knowing" fit into the sequence of "deciding" and "doing"? It may fit in several places. You may know you want to do something long before you do it. Or, you may not consciously know you want to even though you decide to do it. If you then actually make a conscious decision to do the thing, then by the very nature of consciousness you know you are making the decision all the while during the transition from indecision to decision.

There might be some delay between having made the decision and actually doing the thing. You might have decided to let the action be triggered by some stimulus or you might just go ahead and do it as soon as you decided. At any rate, you know that you are doing it as soon as you do it. And, finally, you probably get some immediate feedback so that you know that you have done it soon afterwards.

All of this "knowing" is going on in consciousness. We shouldn't be hasty in assuming how this knowing correlates with brain functions, or "processing that occurs in the physical brain" as you put it.

The whole point of my Mars rover analogy was to clearly separate the functions of consciousness and knowing (resident in the JPL scientist) from the "processing that occurs in the physical brain" (resident in the rover and its on-board computer) and to exaggerate the delays in communication between them. So it is clear, as you say, that the conscious agent is acting before any processing that occurs in the physical brain. But the conscious agent is involved in "knowing" at several points along the process, and there will be a delay in the reporting of any of these incidences of "knowing".

Math Is Hard said:
Can we still call it a conscious agent if its commands occur before conscious awareness of giving the instructions?
Keep in mind that in my view conscious awareness occurs only in the conscious agent. The reporting of conscious awareness is a different thing. That would involve the conscious agent deciding to issue a report of the conscious experience and then doing it, along the same lines as we just discussed for doing anything else. Thus there would be a delay between the commands being issued and the reporting of the conscious awareness of the commands being issued. So the commands don't really occur before conscious awareness of giving them.

Math Is Hard said:
On another topic: Here is a possibility that I am considering. I send an instruction to the Mars Rover and this algorithm says, "over the next 3 minutes, at random intervals you will turn in a random direction". So consciously I have made the decision that the robot will perform random actions during the time span I have specified. This only happens because I decided it. This is why I don't buy any of these arguments against free will. No matter what the robot randomly chooses to do, it was I who placed the order to act randomly (but in the desired fashion) in the first place.
I think there are two things going on here that are pretty easy to separate: a willful action and a random action. This is the same as me deciding to flip a coin. The decision to flip and the action of flipping are the result of free will on my part. But the result, of a tail or a head, is strictly random and not the result of my will. I don't think this presents any argument for or against free will.
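A minimal sketch of the instruction described above (the function name and interval range are invented for illustration): the deliberate, willful part is the program itself; only the timings and headings are random.

```python
import random

def random_turn_routine(duration_s=180, rng=None):
    """Over `duration_s` seconds, turn in a random direction at random
    intervals, as in the thought experiment above (a simulation only)."""
    rng = rng or random.Random()
    elapsed = 0.0
    while True:
        elapsed += rng.uniform(5, 30)   # random interval until the next turn
        if elapsed >= duration_s:
            break
        heading = rng.uniform(0, 360)   # random direction, in degrees
        print(f"t={elapsed:6.1f}s: turn to heading {heading:5.1f}")

random_turn_routine()
```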
 
  • #96
Paul Martin said:
The necessary conditions are (again IMHASO):
1. The conscious agent must know that multiple options for action are available.
2. The conscious agent must know at least something about the probabilities of near-term consequences of at least some of the options in case they are acted out.
3. The conscious agent must be able to choose and execute one of the options in the folklore sense of FW.

I hope you don’t mind if I suggest that we add a fourth necessary condition (which I know is implicit in your conditions, but here I am making it explicit):

4. The agent must be conscious.

I would also respectfully suggest (IMHO) that “know” in the above is too strict, and in fact any agent which simply “believes that it knows” has the necessary conditions for free will (reasoning: I suggest we can never have infallible foreknowledge, thus in a strict sense it is never possible to infallibly “know” about future options; the best we can do is to believe, or to believe that we know) and the 4 necessary conditions then become:

1. The agent must believe that multiple options for action are available.
2. The agent must know (or believe that it knows) at least something about the probabilities of near-term consequences of at least some of the options in case they are acted out.
3. The agent must be able to choose and execute one of the options in the folklore sense of FW.
4. The agent must be conscious.

Would you agree?

Would you also agree that all of the above necessary conditions are compatible with determinism?

If not, why not?

MF
:smile:
 
  • #97
Wow, a potentially very interesting thread became yet another playground for word games on determinism/indeterminism. You would do well to make that discussion more fruitful by focusing on its practical implications.

For instance, what good is restricting the freedom of an individual to be in a particular building for at least 7 hours a day, 5 days a week, three-quarters of the year, for most of the first two decades of her life? In the same manner, what is good about restricting the freedom of her mind to study the same things at the same pace and from the same person?

When you start talking about freedom philosophically, please apply it to something practical. It helps to elucidate what you're talking about.


Now that that's done with, the first time I read about this half-second delay was in Fred Alan Wolf's The Dreaming Universe. The author attributes this delay to a mystical phenomenon in which our actions are actually guided by the future; therefore, time and space are not what we think they are, and blah blah blah blah. He has a Ph.D. in quantum physics, but apparently that doesn't guard him from being slightly off the wall. I think his interpretation is contrived and practically useless, but someone mentioned earlier that he/she would appreciate all links on the subject. Perhaps someone else here could make better use of that book.
 
Last edited:
  • #98
Telos said:
For instance, what good is restricting the freedom of an individual to be in a particular building for at least 7 hours a day, 5 days a week, three-quarters of the year, for most of the first two decades of her life? In the same manner, what is good about restricting the freedom of her mind to study the same things at the same pace and from the same person?

It prepares her to sit in a cubicle for 8 or more hours a day, at least five days a week, for thirty or forty years, in order to earn her living. Unless she spends years cooped up in a house with immature children, which is almost worse!

Did you think we were called into this world to enjoy it?
 
  • #99
moving finger said:
I hope you don’t mind if I suggest that we add a fourth necessary condition (which I know is implicit in your conditions, but here I am making it explicit):

4. The agent must be conscious.
I don't mind at all. Not only should this condition be included but I think it should be listed as number one. Furthermore, I think we should always use the adjective 'conscious' when mentioning the agent just so we don't lose sight of the important and necessary fact of consciousness.

moving finger said:
I would also respectfully suggest (IMHO) that “know” in the above is too strict, and in fact any agent which simply “believes that it knows” has the necessary conditions for free will
Here I respectfully disagree. I tried to be careful in writing my conditions, and after reviewing them in the light of your suggestion, I stand by what I wrote. In my judgment, the 'ability to know' is the most fundamental of all of the aspects of consciousness. I suspect that most, if not all, the rest can be derived from the ability to know.

moving finger said:
I suggest we can never have infallible foreknowledge, thus in a strict sense it is never possible to infallibly “know” about future options
I agree with the fallibility of foreknowledge. I agree that in a strict sense it is not possible to infallibly know much if anything about future options. But I insist that the conscious agent must know *that* there are options available in order for there to be free will. If the conscious agent only suspected that there were options, or believed that there were options, then an action might be induced on that basis. But I would disqualify such an action as a free will action and lump it in with coin tosses.

moving finger said:
2. The agent must know (or believe that it knows) at least something about the probabilities of near-term consequences of at least some of the options in case they are acted out.
I would not agree to weaken this condition by including the parenthetical phrase for the same reason as above. I think I weakened it enough by including the "at least something about" and "at least some of" qualifiers.

moving finger said:
Would you agree?
No.

moving finger said:
Would you also agree that all of the above necessary conditions are compatible with determinism?
No. Not also, and not at all.

moving finger said:
If not, why not?
I am on thin ice here because I am never comfortable with any word ending in "ism". I just don't understand well enough what those words mean, and there is usually a society of specialists who claim ownership of those kinds of words, which together is enough to make me hesitant. But since you asked me, I'll try to answer your question.

First, let me define what I would mean if I were to use the term 'determinism'. To me, determinism means that the evolution of states over which determinism holds can follow only a single course. That is, there can be only one outcome in a deterministic system. In principle, this can be tested by restoring the initial conditions of the system and letting it evolve again. As many times as this is done, the outcome will always be the same.

If my necessary conditions for free will obtain, and you ran this "playback" thought experiment several times, the conscious agent could choose different options for the same conditions in different runs, thus producing different outcomes.
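The "playback" test can be pictured with a toy simulation (a sketch only; a seeded pseudo-random source stands in for any apparent randomness in the system):

```python
import random

def evolve(state, steps, seed):
    """A toy 'world': a deterministic update rule plus a seeded random
    source. Same initial state and seed -> same run, every time."""
    rng = random.Random(seed)
    for _ in range(steps):
        state = (3 * state + rng.randrange(10)) % 1000
    return state

# Restore the initial conditions and let the system evolve again, five times.
outcomes = {evolve(state=7, steps=50, seed=123) for _ in range(5)}
print(outcomes)  # one element: the system is deterministic in this sense
# Paul Martin's claim is that a run containing a free conscious agent could
# land on DIFFERENT outcomes across such replays.
```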
 
  • #100
I remember reading a paper about a week ago (God, I wish I could remember where I found it) that was talking about this same issue, and how to create a machine that could emulate the apparent freedom of human behavior. You simply create a program that can develop hypotheses based on memory about the outcomes of different courses of action. Based on initial programming along with whatever it has learned through experience, it chooses the course of action that is most desirable. If multiple outcomes are equally desirable or multiple actions will bring about the same outcome, then a random number generator is used to select one arbitrarily.

This machine would display all of the behavior you guys want from a free agent. It weighs options, choosing the best based on its preferences, and it could, in principle, choose differently each time if the possible courses of action make little difference to it. Its behavior would not be any more predictable than human behavior. The only thing it is lacking is consciousness. Do we really want to say that being conscious of your behavior is all that is required for free will? Does that mean a conscious rock would have free will?
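The decision procedure described in that paper can be sketched in a few lines (hypothetical names; learning from experience is elided here):

```python
import random

def choose_action(options, desirability, rng):
    """Pick the action whose predicted outcome scores highest; break
    exact ties with a random number generator, as described above."""
    scored = {action: desirability(outcome) for action, outcome in options.items()}
    best = max(scored.values())
    tied = [action for action, score in scored.items() if score == best]
    return rng.choice(tied)

options = {"go left": "rock sample", "go right": "rock sample", "wait": "nothing"}
desirability = {"rock sample": 1.0, "nothing": 0.0}.get
print(choose_action(options, desirability, random.Random()))  # left or right
```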
 
  • #101
loseyourname said:
This machine would display all of the behavior you guys want from a free agent.
Behavior, yes. But behavior is a very unimportant aspect of this topic.
loseyourname said:
The only thing it is lacking is consciousness.
But... that's the only thing that *is* important in a discussion of consciousness. I also happen to think it is the most important thing that exists in the universe, but you don't have to buy into that just yet.
loseyourname said:
Do we really want to say that being conscious of your behavior is all that is required for free will?
Not me. I specified earlier in this thread exactly what I think is required for free will, the most important of which necessary conditions is consciousness.
loseyourname said:
Does that mean a conscious rock would have free will?
It would if and only if it met the other necessary conditions. The one about being able to execute a willful action would be the one where the rock would probably fail.
 
Last edited:
  • #102
selfAdjoint said:
Did you think we were called into this world to enjoy it?
You didn't ask me, but I'll give you my answer to your question anyway. Yes. I think we were called into this world to enjoy it. I think there are three other reasons as well: to create new things to enjoy, to help others enjoy, and to figure out how it all works.

I think each of us has some in-born compulsion to do some weighted combination of these things, the weightings varying quite a bit from individual to individual.

You had to ask.
 
  • #103
selfAdjoint said:
As the links make clear, Libet's own defense of free will is that the individual can "veto" the brain's action after it has begun and before the actual physical action begins. This seems to me as much sheer desperate invocation of magic as every other explanation of strong free will.
I would be interested to know what you mean by this. I agree with your general idea, and I was thinking that the 'veto power' is itself nothing more than an action of the brain, and therefore subject to the delay. Can't we say that the veto action also needs a readiness potential? And that the physical expression of that particular readiness potential (for the veto) is the suppression of some former readiness potential (perhaps remaining motionless instead of throwing a punch)? This would support the illusion, but wouldn't Libet have thought of this?
 
  • #104
kcballer21 said:
I would be interested to know what you mean by this. I agree with your general idea, and I was thinking that the 'veto power' is itself nothing more than an action of the brain, and therefore subject to the delay. Can't we say that the veto action also needs a readiness potential? And that the physical expression of that particular readiness potential (for the veto) is the suppression of some former readiness potential (perhaps remaining motionless instead of throwing a punch)? This would support the illusion, but wouldn't Libet have thought of this?

Well, I have no problem with at least conjecturing that kind of thing, subject to experimental investigation. But the point of Libet's expressed veto was that it be non-deterministic, that it have no explainable chain of causes. And as others have pointed out, that is really an incoherent desire.
 
  • #105
Paul Martin said:
Behavior, yes. But behavior is a very unimportant aspect of this topic.

What the heck? We're discussing whether or not actions are free. Are actions not a form of behavior? Don't you agree that being free to control your behavior against deterministic outputs should be manifested somehow in your behavior? Could a being with no behavior be free? Free to do what? It couldn't do anything.

But... that's the only thing that *is* important in a discussion of consciousness.

But . . . this is a discussion of free will, at least at this point. It isn't a discussion of consciousness. In order to make it a discussion of consciousness, we'll have to first conclude that no non-conscious being could ever have free will. Presumably this is because consciousness in this conception is a causal agent that is non-deterministic yet not completely random. So what does that mean? We're just back at step one. Saying something is free because it is conscious doesn't solve anything. Is consciousness an uncaused cause? Some kind of agent that makes decisions out of the blue according to no set of rules?

I also happen to think it is the most important thing that exists in the universe, but you don't have to buy into that just yet.

What is meant by 'important'? It's certainly important to me. Without it, I wouldn't have much else going for me.

Not me. I specified earlier in this thread exactly what I think is required for free will, the most important of which necessary conditions is consciousness. It would if and only if it met the other necessary conditions. The one about being able to execute a willful action would be the one where the rock would probably fail.

So what about our super Mars Rover, complete with learning software and a random number generator. Let's say that it is also designed in such a way that it is conscious. Its actions are still dictated by the same set of dynamic rules and random output and its behavior is exactly the same. Is it then free?
 
  • #106
Paul Martin said:
Here I respectfully disagree. I tried to be careful in writing my conditions, and after reviewing them in the light of your suggestion, I stand by what I wrote. In my judgment, the 'ability to know' is the most fundamental of all of the aspects of consciousness. I suspect that most, if not all, the rest can be derived from the ability to know.
Then I believe there is a fundamental problem with your concept of free will.
By "the agent knows" (as opposed to "the agent believes that it knows") I assume that you mean "the agent knows infallibly"? ie that the agent's knowledge is guaranteed to be 100% absolutely correct with no possibility of it being wrong?
I believe that such infallible epistemic "knowledge" is in principle not possible for an agent. IMHO therefore this "necessary condition" could never be met.

Paul Martin said:
I agree with the fallibility of foreknowledge. I agree that in a strict sense it is not possible to infallibly know much if anything about future options. But I insist that the conscious agent must know *that* there are options available in order for there to be free will.
This to me seems a contradiction.
Part of the "foreknowledge" of a future option is actually to "know whether it will be available or not". If infallible foreknowledge of future options is not possible (as you agree), then it seems to me that it follows trivially that the agent cannot know infallibly whether any particular future option will be available or not, ie it cannot know infallibly *that* there are options available. It can "believe that it knows" (I agree), but it cannot “know infallibly”.

Paul Martin said:
If the conscious agent only …. believed that there were options, then an action might be induced on that basis. But I would disqualify such an action as a free will action …...
Such an action may indeed not qualify as free will under your definition of free will, but your definition is not the only possible definition, and as I said above I do not see how your necessary condition (2) can ever be met if you insist on infallible knowledge.

Paul Martin said:
I would not agree to weaken this condition by including the parenthetical phrase for the same reason as above. I think I weakened it enough by including the "at least something about" and "at least some of" qualifiers.
My same reply as above.

Paul Martin said:
I am on thin ice here because I am never comfortable with any word ending in "ism". I just don't understand well enough what those words mean, and there is usually a society of specialists who claim ownership of those kinds of words, which together is enough to make me hesitant.
OK, please rest assured I am not trying to pull any “tricks” here. Let me provide my definition of determinism:

Definition of Determinism: The universe, or any self-contained part thereof, is said to be evolving deterministically if it has only one possible state at time t1 which is consistent with its state at some previous time t0 and with all the laws of nature.

Paul Martin said:
But since you asked me, I'll try to answer your question.

First, let me define what I would mean if I were to use the term 'determinism'. To me, determinism means that the evolution of states over which determinism holds can follow only a single course. That is, there can be only one outcome in a deterministic system. In principle, this can be tested by restoring the initial conditions of the system and letting it evolve again. As many times as this is done, the outcome will always be the same.
OK, I believe my definition agrees completely with this.

Paul Martin said:
If my necessary conditions for free will obtain, and you ran this "playback" thought experiment several times, the conscious agent could choose different options for the same conditions in different runs, thus producing different outcomes.
Interesting.
Why do you say the agent “could choose different options for the same conditions in different runs”?

And is what you say here derived logically from your stated definition of free will and necessary conditions for free will (in which case can you show how it follows), or is it simply an intuitive feeling that you have?

Some things to ponder on:
If the world is operating deterministically then the agent is also covered by this, hence it follows that the agent could NOT in fact "choose different options for the same conditions in different runs".

Thus if you are suggesting that the agent can "choose different options for the same conditions in different runs" this would seem to imply that the world (at least the part that is concerned with the agent's choice) is not operating deterministically.

But if the agent's choice is not deterministic, then what is it? Indeterministic?

Would you care to explain how the introduction of indeterminism into the agent's method of choice endows that agent with "free will"?

Thanks!

MF
:smile:
 
Last edited:
  • #107
selfAdjoint said:
Did you think we were called into this world to enjoy it?
I must be deaf, I never heard any call! :biggrin:

MF
:smile:
 
  • #108
loseyourname said:
I remember reading a paper about a week ago ... how to create a machine that could emulate the apparent freedom of human behavior. You simply create a program that can develop hypotheses based on memory about the outcomes of different courses of action. Based on initial programming along with whatever it has learned through experience, it chooses the course of action that is most desirable. If multiple outcomes are equally desirable or multiple actions will bring about the same outcome, then a random number generator is used to select one arbitrarily.

This machine would display all of the behavior you guys want from a free agent. It weighs options, choosing the best based on its preferences, and it could, in principle, choose differently each time if the possible courses of action make little difference to it. Its behavior would not be any more predictable than human behavior.
Interesting.
All of what has been described above about free will in a machine (allowing for some woolliness in the language) I can see as being entirely compatible with a deterministic world.

loseyourname said:
The only thing it is lacking is consciousness. Do we really want to say that being conscious of your behavior is all that is required for free will? Does that mean a conscious rock would have free will?
It’s not suggested that being "conscious of your behaviour" is all that is required – read the “necessary conditions” posts above.

Show me a conscious rock, and if it also meets the other necessary conditions, then I'll show you a rock with free will. :biggrin:

MF

:smile:
 
  • #109
selfAdjoint said:
the point of Libet's expressed veto was that it be non-deterministic, that it have no explainable chain of causes. And as others have pointed out, that is really an incoherent desire.
That's a euphemism if ever there was one!

Let's call a spade a spade. If the veto is "non-deterministic" then this is the same as saying it is "random" or "indeterministic".

"Incoherent desire" is therefore a sugar-coated "random event".

MF
:smile:
 
  • #110
Definition of Free Will

loseyourname said:
So what about our super Mars Rover, complete with learning software and a random number generator. Let's say that it is also designed in such a way that it is conscious. Its actions are still dictated by the same set of dynamic rules and random output and its behavior is exactly the same. Is it then free?
Whether it is acting with "free will" depends on your chosen definition of "free will" and (importantly) whether that definition is self-consistent or not (ie free will defined in a non-self-consistent way simply cannot exist, no matter how intuitively "right" it feels).

I'll show you mine if you show me yours.

MF
:smile:
 
  • #111
moving finger said:
Then I believe there is a fundamental problem with your concept of free will.
Good. I am eager to examine any beliefs that challenge my own beliefs. I figure that is the best way to change my own beliefs, if they are due for a change, to bring them closer to the truth. Let's have a look.

moving finger said:
I assume that you mean "the agent knows infallibly"?
Yes.

moving finger said:
ie that the agent's knowledge is guaranteed to be 100% absolutely correct with no possibility of it being wrong?
Yes, to the 100% part and following. I am not aware of any guarantee, though. I strongly suspect that at least there is not one in writing.

moving finger said:
I believe that such infallible epistemic "knowledge" is in principle not possible for an agent. IMHO therefore this "necessary condition" could never be met.
I can understand how that belief could lead you to that opinion. And if "such infallible epistemic "knowledge" is in principle not possible for an agent" then I agree that we could logically conclude that my "necessary condition" could not be met.

But I don't share your first belief here. What exactly is the "principle" on which you seem to base it?

moving finger said:
This to me seems a contradiction.
Yes. And I think I see the reason it seems that way.

moving finger said:
Part of the "foreknowledge" of a future option is actually to "know whether it will be available or not".
The reason there seems to be a contradiction is that our definitions of 'foreknowledge' are inconsistent. Here you claim that knowledge of whether the option is available is part of foreknowledge. I specifically excluded that knowledge from being part of foreknowledge. Here's what I said:

Paul Martin said:
I agree that in a strict sense it is not possible to infallibly know much if anything about future options. But I insist that the conscious agent must know *that* there are options available in order for there to be free will.

moving finger said:
then it seems to me that it follows trivially that the agent cannot know infallibly whether any particular future option will be available or not, ie it cannot know infallibly *that* there are options available. It can "believe that it knows" (I agree), but it cannot “know infallibly”.
It seems that way to you because you are using your definition of 'foreknowledge'. Since we are working on understanding my sufficient conditions, I must respectfully ask you to consider them using my definition for 'foreknowledge'. Otherwise my intent will be hopelessly confused and lost. Using my definition, the conscious agent can "know infallibly" that it has options.

Here's how I would sum up my view of this in plain words: The conscious agent could truthfully say the following about some free-will choice: "I know I can pick. I don't exactly know how to pick, and I don't know exactly what will happen if I pick. But I know I can pick." I say that the conscious agent has free will if and only if he/she can truthfully say something like that about a particular choice of action.


moving finger said:
My same reply as above.
As is mine, although I should explain that when I said "don't exactly know" I mean that at least something must be infallibly known.

moving finger said:
OK, I believe my definition agrees completely with this.
Yes, I think we see eye-to-eye as to what determinism is.

Where we might differ is in the identification of exactly what is deterministic and what is not. As I have said many times, but since it doesn't seem to take so it bears repeating, in my view reality consists of a conscious agent which has free will, and the thoughts of that conscious agent. Those thoughts constitute the *rest of* reality; it is the mysterious Void filled with nothing and at the same time physical universes.

So, in my view, free will inheres only in the conscious agent (hence the absolute requirement for consciousness in my conditions). The "rest of" reality, the universe(s), etc. may operate deterministically in part, or some actions within it may be determined by conscious will (always and only exercised by the one conscious agent.)

You can think of this picture as a person sitting at a computer running an implementation of a cellular automaton program. The program allows the person to hit a key at any time during the evolution of the patterns and stop the action, change any cell, and then resume the action. The evolution of the automaton's patterns is deterministic except for those times in which the person deliberately and consciously changes one or more cells. I think that's how reality works.
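The picture can be made concrete with a toy cellular automaton (a sketch under stated assumptions; rule 110 and the intervention step are arbitrary choices). Between key-presses the evolution is fully determined; the only undetermined events are the injected cell changes:

```python
def step(cells, rule=110):
    """One deterministic update of a 1D elementary cellular automaton."""
    n = len(cells)
    return [(rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
            for i in range(n)]

cells = [0] * 20
cells[10] = 1                  # initial condition
for t in range(12):
    if t == 6:                 # the "person at the keyboard" pauses the run,
        cells[3] = 1           # consciously changes one cell, and resumes
    cells = step(cells)
    print("".join(".#"[c] for c in cells))
```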
 
  • #112
Paul Martin said:
if "such infallible epistemic "knowledge" is in principle not possible for an agent" then I agree that we could logically conclude that my "necessary condition" could not be met.

But I don't share your first belief here. What exactly is the "principle" on which you seem to base it?
Heisenberg’s uncertainty principle would be a good starting point – I guess you have heard of it? This principle basically says that the world is indeed epistemically indeterminable. How would you incorporate this principle into your philosophy?
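For reference, the standard position-momentum form of the principle places a hard lower bound on the joint precision of any simultaneous measurement, which is the sense of "epistemically indeterminable" intended here:

```latex
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}
```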

Since you seem to believe that infallible knowledge of possible future options (contrary to Heisenberg) is possible, would you care to give an example of what you consider to be such infallible knowledge?

Paul Martin said:
Part of the "foreknowledge" of a future option is actually to "know whether it will be available or not".
Paul Martin said:
The reason there seems to be a contradiction is that our definitions of 'foreknowledge' are inconsistent. Here you claim that knowledge of whether the option is available is part of foreknowledge.
This issue seems trivial.
Either the choice has not yet been made, and the agent believes choice options to be available, in which case these are “future options”, hence any supposed knowledge about them is knowledge about the future, hence foreknowledge is required.
Or the choice has been made, in which case I agree no foreknowledge is involved, but neither are there any options available (the choice has been made).

Paul Martin said:
I specifically excluded that knowledge from being part of foreknowledge. Here's what I said:

I agree that in a strict sense it is not possible to infallibly know much if anything about future options. But I insist that the conscious agent must know *that* there are options available in order for there to be free will.
With respect, all this achieves is that it includes the precondition “the conscious agent must know *that* there are options available” as part of your definition of free will. You have not actually shown that the precondition “the conscious agent must know *that* there are options available” can be met; you have simply asserted that this precondition needs to be met in order to render your definition of free will consistent.

Your definition of free will may be inconsistent.

Paul Martin said:
Using my definition, the conscious agent can "know infallibly" that it has options.
Your agent “can know infallibly” only because you have defined that the agent MUST know infallibly as part of your definition of free will. But defining that the agent MUST know infallibly in order to have free will does not in fact allow us to conclude that the agent CAN know infallibly. In other words, it may be the case that your definition of free will is not consistent (eg if it is not possible for an agent to know infallibly).

An analogy : It is possible to define free will as “the ability of an agent to have chosen differently to what it did actually choose”. It follows from this definition that for an agent to have free will, it must have been able to choose differently from what it did choose. But this does NOT prove that the agent could have chosen differently. All it proves is that IF the agent could have chosen differently then it also could have had free will, whereas if the agent could not have chosen differently then free will (as defined) is not possible.

In summary : What I am suggesting is that free will according to your definition implies EITHER that an agent has infallible knowledge of possible future options (this seems to be your interpretation) OR that free will as you have defined it is not possible (my interpretation, since I do not believe that an agent can have infallible knowledge of possible future actions).

Paul Martin said:
Here's how I would sum up my view of this in plain words: The conscious agent could truthfully say the following about some free-will choice: "I know I can pick. I don't exactly know how to pick, and I don't know exactly what will happen if I pick. But I know I can pick." I say that the conscious agent has free will if and only if he/she can truthfully say something like that about a particular choice of action.

Here is how I would re-phrase your summary in plain words:
The conscious agent could truthfully say the following about some free-will choice: "I believe that I know I can pick. I don't exactly know how to pick, and I don't know exactly what will happen if I pick. But I believe that I know I can pick." I say that the conscious agent has free will if and only if he/she can truthfully say something like that about a particular choice of action.

Paul Martin said:
You can think of this picture as a person sitting at a computer running an implementation of a cellular automaton program. The program allows the person to hit a key at any time during the evolution of the patterns and stop the action, change any cell, and then resume the action. The evolution of the automaton's patterns are deterministic except for those times in which the person deliberately and consciously changes one or more cells. I think that's how reality works.
Unfortunately, though it is clear that the cellular automaton program works deterministically, this does not give a clear idea of how the “person” operates. It seems to me that you have simply moved the problem from one level to another – it is not clear whether the “person” operates deterministically or not. How this free will actually works is still (in your model) a mystery.
MF

:smile:
 
  • #113
moving finger said:
Whether it is acting with "free will" depends on your chosen definition of "free will" and (importantly) whether that definition is self-consistent or not (ie free will defined in a non-self-consistent way simply cannot exist, no matter how intuitively "right" it feels).

I'll show you mine if you show me yours.

MF
:smile:

I don't personally believe in any concept of strong free will. All it means to me for an action to be free is that it is compelled by something internal to my own psyche, rather than by external coercion or pathology.
 
  • #114
loseyourname said:
I don't personally believe in any concept of strong free will. All it means to me for an action to be free is that it is compelled by something internal to my own psyche, rather than by external coercion or pathology.
I have no idea what you mean by strong free will (but from the rest of your post I suspect we have some similar beliefs).

May I ask - do you believe your concept of free will is compatible with determinism?
MF
:smile:
 
  • #115
moving finger said:
Heisenberg’s uncertainty principle would be a good starting point – I guess you have heard of it? This principle basically says that the world is indeed epistemically indeterminable. How would you incorporate this principle into your philosophy?
Yes, I have heard of it. I would incorporate it into my philosophy by saying that the Uncertainty Principle applies to the world, which includes the physical universe, human bodies/brains, and the information available to the bodies/brains. I would say that it does not apply to reality as a whole which includes CC in addition to the world.

moving finger said:
Since you seem to believe that infallible knowledge of possible future options (contrary to Heisenberg) is possible
No. You missed the distinction again. I said that I believe infallible knowledge *that* options are available is possible. I admitted that knowledge *of* future options is probably incomplete or wrong.

moving finger said:
would you care to give an example of what you consider to be such infallible knowledge?
The certain knowledge I have that I can continue typing this response or I can take a break and have lunch. (You should interpret my use of 'I' here as 'TEOPM'. Readers who may be baffled should see my discussions with Moving Finger in the General Philosophy thread "A Constructive Critique of Libertarianism" for a definition of 'TEOx'.)

moving finger said:
This issue seems trivial.
Yes, I agree it is a trivial issue. Nevertheless, I don't think we have successfully communicated what we each have been trying to say about the issue.

moving finger said:
Either the choice has not yet been made, and the agent believes choice options to be available, in which case these are “future options”, hence any supposed knowledge about them is knowledge about the future, hence foreknowledge is required.
OK, let's say the choice has not yet been made. I say that the conscious agent *knows* that the choice is available. If not, then this example would not qualify as a free will option. And, yes, it is a "future option" in the sense that the conscious agent knows that the option exists before the choice is made to exercise the option. This knowledge, "that the option exists", is required in my view. Moreover, I require that it be infallible knowledge. So the trivial issue is whether we include this infallible knowledge in the scope of the definition of 'foreknowledge'. I really don't care as long as you understand that I mean the infallible knowledge *that* an option exists must exist in order to have free will, even though much or all of the rest of the foreknowledge related to the option may be in doubt or unreliable.

moving finger said:
Or the choice has been made, in which case I agree no foreknowledge is involved, but neither are there any options available (the choice has been made).
I agree. Furthermore, this case has nothing to do with free will.

moving finger said:
With respect, all this achieves is that it includes the precondition “the conscious agent must know *that* there are options available” as part of your definition of free will.
The respect is graciously acknowledged, and with respect, I would submit that including preconditions is an expected part of making a definition. That is all I was attempting to achieve.

moving finger said:
You have not actually shown that the precondition “the conscious agent must know *that* there are options available” can be met, you have simply asserted that this precondition needs to be met as part in order to render your definition of free will consistent.
True.

moving finger said:
Your definition of free will may be inconsistent.
True. That is why I invite anyone to demonstrate any inconsistency. I would like to be among the first to know about it.

moving finger said:
Your agent “can know infallibly” only because you have defined that the agent MUST know infallibly as part of your definition of free will. But defining that the agent MUST know infallibly in order to have free will does not in fact allow us to conclude that the agent CAN know infallibly.
True. And true.

moving finger said:
In other words, it may be the case that your definition of free will is not consistent (eg if it is not possible for an agent to know infallibly).
I can see how this would make my definition vacuous, but I don't see any inconsistency if in fact infallible knowing were impossible.

moving finger said:
An analogy : It is possible to define free will as “the ability of an agent to have chosen differently to what it did actually choose”. It follows from this definition that for an agent to have free will, it must have been able to choose differently from what it did choose. But this does NOT prove that the agent could have chosen differently. All it proves is that IF the agent could have chosen differently then it also could have had free will, whereas if the agent could not have chosen differently then free will (as defined) is not possible.
I agree. Both this definition and mine have the same "weakness" in that we can't prove that the definition is not vacuous.

moving finger said:
In summary : What I am suggesting is that free will according to your definition implies EITHER that an agent has infallible knowledge of possible future options (this seems to be your interpretation) OR that free will as you have defined it is not possible (my interpretation, since I do not believe that an agent can have infallible knowledge of possible future actions).
I agree with this summary (except that I would insert 'some' in front of the first appearance of 'infallible'.)

I would further summarize it by saying, Either free will exists as I have defined it, or there is no such thing. You believe the latter.

moving finger said:
Here is how I would re-phrase your summary in plain words:
The conscious agent could truthfully say the following about some free-will choice: "I believe that I know I can pick. I don't exactly know how to pick, and I don't know exactly what will happen if I pick. But I believe that I know I can pick." I say that the conscious agent has free will if and only if he/she can truthfully say something like that about a particular choice of action.
We disagree here. I'd say that if this is all there were, then there is no such thing as free will.

Ummm. I think I have a choice to stop here and have lunch or to keep typing, but I'm not sure. Can I decide or not? Hmmm. I don't seem to be able to. I just keep typing for some reason. I'll bet it is because the entire history of the universe and my history as a body/brain moving about in it has set the stage so that right now I am typing away even though I am hungry. That's probably it. I probably couldn't stop and eat if I wanted to. There is no free will at all.

moving finger said:
Unfortunately, though it is clear that the cellular automaton program works deterministically, this does not give a clear idea of how the “person” operates.
**Exactly!** This is one of the main messages I was trying to get across. I think there is very little hope of getting a clear idea of *how* CC operates. But the cellular automaton example clearly shows *that* the "person" operates in a way that interferes with the otherwise deterministic evolution of the automaton.

moving finger said:
It seems to me that you have simply moved the problem from one level to another
**Exactly!** That is exactly what my world-view does. It takes the great mystery of the Hard Problem and moves it to another level which is outside the physical world. That leaves the physical world explainable and understandable and it reduces the mysteries of reality as a whole down to just this single mystery. It's like moving the mystery of music coming out of a radio back to the transmitter where it really originates and where it belongs.

moving finger said:
How this free will actually works is still (in your model) a mystery.
Yes. But then again, it is a mystery in every model.
 
  • #116
loseyourname said:
What the heck? We're discussing whether or not actions are free. Are actions not a form of behavior? Don't you agree that being free to control your behavior against deterministic outputs should be manifested somehow in your behavior? Could a being with no behavior be free? Free to do what? It couldn't do anything.
OK. OK. I should have said "relatively unimportant" rather than "very unimportant". Yes, actions are a form of behavior, but far and away most actions in this universe do not enter into the question of free will. What we are trying to figure out is the determinant for those actions which we suspect might be influenced or determined by free will. It is that determinant which I think is relatively important while the action itself (the behavior) is relatively unimportant. What the heck. I wasn't very clear. I'm sorry.

loseyourname said:
But . . . this is a discussion of free will, at least at this point. It isn't a discussion of consciousness. In order to make it a discussion of consciousness, we'll have to first conclude that no non-conscious being could ever have free will.
Good point. I have certainly jumped to that conclusion myself as is evident from my list of conditions for free will. I will be glad to retreat if someone can tell me the difference between conscious free will and non- or unconscious free will that makes any sense.

loseyourname said:
Presumably this is because consciousness in this conception is a causal agent that is non-deterministic yet not completely random.
For this to be the reason I think you would have to strengthen it by saying that consciousness is the *only* non-deterministic yet not completely random causal agent. But I agree that it is premature to make such a claim.

loseyourname said:
So what does that mean? We're just back at step one. Saying something is free because it is conscious doesn't solve anything.
I agree. I think you have to include my entire list of necessary and sufficient conditions.

loseyourname said:
Is conciousness an uncaused cause?
I think so.

loseyourname said:
Some kind of agent that makes decisions out of the blue according to no set of rules?
I think it can do that.

loseyourname said:
What is meant by 'important.'
What I meant was that I think consciousness is a necessary ingredient in any complete explanation for what goes on in reality, in particular for what goes on in the behavior of people.

loseyourname said:
So what about our super Mars Rover, complete with learning software and a random number generator. Let's say that it is also designed in such a way that it is conscious.
OK.

loseyourname said:
Its actions are still dictated by the same set of dynamic rules and random output and its behavior is exactly the same.
Not necessarily. If it is conscious, and if it met my necessary and sufficient conditions, then in different runs of my thought experiment the outcomes could be different even when the random number generator returned identical sequences in the different runs (which it must do if you run the thought experiment carefully and correctly).

loseyourname said:
Is it then free?
In your scenario, where its actions were still dictated by the same mechanisms, then no, it is not free. In my scenario where some actions may be determined by my necessary condition number 3, then yes, it would be free. IMHO.
 
  • #117
moving finger said:
would you care to give an example of what you consider to be such infallible knowledge?
Paul Martin said:
The certain knowledge I have that I can continue typing this response or I can take a break and have lunch.
Please let me just clarify and replay your examples here (this is important to ensure there is no misunderstanding caused by any ambiguity). I hope you do not mind if I also re-phrase your example in terms of an independent (conscious) agent rather than “I” or “me” (because of the confusion this has caused already).

What you are actually saying (correct me if I am wrong) is the following :
1 : The agent has certain knowledge that it will be able to continue typing a response (ie it has certain knowledge that an option, the option “to continue typing a response”, will be available to it, as an option, in the future).
2 : The agent has certain knowledge that it will be able to take a break and have lunch (ie it has certain knowledge that an option, the option “to take a break and have lunch”, will be available to it, as an option, in the future).

Firstly : Can you explain how it is (what is the mechanism whereby) the agent can acquire this “certain knowledge” that these options will in fact be available (as opposed to it simply BELIEVING that they will be available)?

Secondly : I suggest that the agent does not in fact have “certain knowledge” that these options (or any other options) will be available to it. In an extreme (admittedly improbable, but nevertheless possible) example, the agent could be destroyed in the next instant by an asteroid which hits its home town. This would wipe out its ability both to continue, and to take a break and have lunch, and all other options. The agent in fact does not have certain knowledge that it will not be destroyed in this way (or any other way) in the next instant, therefore it does not have “certain knowledge” that the options you have described will in fact be available to it. Generalising, I conclude that no agent can have certain knowledge that any particular future option will be available.
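The argument just given has a compact modal form; the gloss below is purely illustrative, and the notation is not from the original posts. Read $K_a p$ as "agent $a$ has certain (infallible) knowledge that $p$", $\Diamond$ as "it is possibly the case that", and let $p$ stand for "the option will be available".

$$
\begin{aligned}
&1.\ K_a p \rightarrow \neg\Diamond\neg p && \text{(certain knowledge leaves no possibility of error)}\\
&2.\ \Diamond\neg p && \text{(the asteroid: availability of the option can fail)}\\
&\therefore\ \neg K_a p && \text{(no certain knowledge of future options)}
\end{aligned}
$$

Premise 1 is the strong, infallibilist reading of "certain knowledge"; a fallibilist reading of "know" rejects it, which is where the two definitions of free will in this exchange part company.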

Paul Martin said:
I say that the conscious agent *knows* that the choice is available. If not, then this example would not qualify as a free will option. And, yes, it is a "future option" in the sense that the conscious agent knows that the option exists before the choice is made to exercise the option. This knowledge, "that the option exists", is required in my view.
I understand that you stipulate (as part of your definition of free will) that it is REQUIRED that the “agent knows infallibly that the option exists” in order for the agent to have “free will” according to your definition of “free will”. With respect, this is not the issue. The issue is whether it is in fact POSSIBLE for an agent to know infallibly that an option exists. I believe that I have shown above that such infallible foreknowledge is not possible. Conclusion : “Free will” according to your definition is not possible.

Paul Martin said:
I would further summarize it by saying, Either free will exists as I have defined it, or there is no such thing. You believe the latter.
I believe free will exists. But I would define free will differently to you (as indicated already by my suggested changes to your necessary conditions, which changes you do not accept).

moving finger said:
Here is how I would re-phrase your summary in plain words :
The conscious agent could truthfully say the following about some free-will choice: "I believe that I know I can pick. I don't exactly know how to pick, and I don't know exactly what will happen if I pick. But I believe that I know I can pick." I say that the conscious agent has free will if and only if he/she can truthfully say something like that about a particular choice of action.
Paul Martin said:
We disagree here. I'd say that if this is all there were, then there is no such thing as free will.
According to your definition of free will, yes. According to my definition of free will, this is exactly what free will is.

moving finger said:
Unfortunately, though it is clear that the cellular automaton program works deterministically, this does not give a clear idea of how the “person” operates.
Paul Martin said:
**Exactly!** This is one of the main messages I was trying to get across. I think there is very little hope of getting a clear idea of *how* CC operates. But the cellular automaton example clearly shows *that* the "person" operates in a way that interferes with the otherwise deterministic evolution of the automaton.
And if the person is also operating deterministically?
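For concreteness, here is a minimal Python sketch of the kind of cellular-automaton picture being discussed; it is an editorial illustration, not Paul's program. A one-dimensional Rule-90 automaton evolves deterministically unless an external "agent" callback flips cells between steps; whether that intervener is itself deterministic is exactly the question just asked. Rule 90 and the intervention function are hypothetical choices.

```python
def step(cells):
    """One deterministic step of the Rule-90 automaton (XOR of neighbours)."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

def evolve(cells, generations, intervene=None):
    """Evolve the automaton; 'intervene', if supplied, may flip cells between
    steps -- the analogue of the 'person' interfering with the otherwise
    deterministic evolution."""
    history = [cells]
    for g in range(generations):
        cells = step(cells)
        if intervene is not None:
            cells = intervene(cells, g)
        history.append(cells)
    return history

row = [0] * 8
row[4] = 1
baseline = evolve(row, 4)          # fully determined by the initial row

def agent(cells, g):
    """An external intervener: flips one cell at generation 2."""
    if g == 2:
        cells = cells[:]
        cells[0] ^= 1
    return cells

perturbed = evolve(row, 4, intervene=agent)
print(baseline != perturbed)       # True: the interference is visible
```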

moving finger said:
It seems to me that you have simply moved the problem from one level to another
Paul Martin said:
**Exactly!** That is exactly what my world-view does. It takes the great mystery of the Hard Problem and moves it to another level which is outside the physical world. That leaves the physical world explainable and understandable and it reduces the mysteries of reality as a whole down to just this single mystery. It's like moving the mystery of music coming out of a radio back to the transmitter where it really originates and where it belongs.
Moving the problem around without actually addressing the problem seems (with respect) to be rather pointless?

moving finger said:
How this free will actually works is still (in your model) a mystery.
Paul Martin said:
Yes. But then again, it is a mystery in every model.
I disagree. It depends on how one defines free will.
If one takes an idealistic approach and defines free will such that free will is impossible (ie defining it to match our intuitive feeling of free will), then explaining how such free will operates will also be impossible (this seems to me to be your approach).
If however one takes a pragmatic approach and defines free will such that free will is possible (even though it may not provide a very satisfying or intuitively “nice” result in terms of the "feeling" of free will), then explaining how free will operates is also possible (this is my approach).

MF
:smile:
 
  • #118
moving finger said:
I hope you do not mind if I also re-phrase your example in terms of an independent (conscious) agent rather than “I” or “me” (because of the confusion this has caused already).
Not at all. Sorry for contributing to the confusion.

moving finger said:
Can you explain how it is (what is the mechanism whereby) the agent can acquire this “certain knowledge” that these options will in fact be available (as opposed to it simply BELIEVING that they will be available)?
No. And after thinking more carefully, I should amend my example by saying, "The certain knowledge I have that[, barring any malfunction of the PNS of Paul Martin (PNSPM),] I can continue typing this response...".

As for the mechanism, it is probably similar to the one by which the agent, working through PNSPM, comes to know what green looks like, as reported to the agent via the sensory and perceptive mechanisms of PNSPM.

moving finger said:
I suggest that the agent does not in fact have “certain knowledge” that these options (or any other options) will be available to it.
Would you say that the agent does not have certain knowledge of what green looks like as reported by a PNS?

moving finger said:
In an extreme (admittedly improbable, but nevertheless possible) example, the agent could be destroyed in the next instant by an asteroid which hits its home town.
Not in my cosmos, it couldn't. In my cosmos the agent does not live in the home town. The asteroid could wipe out the PNS -- and I have just corrected for that eventuality -- but in my view, not the agent.

moving finger said:
I believe that I have shown above such infallible foreknowledge is not possible.
I believe you have not.

moving finger said:
The issue is whether it is in fact POSSIBLE for an agent to know infallibly that an option exists.

moving finger said:
I believe free will exists. But I would define free will differently to you (as indicated already by my suggested changes to your necessary conditions, which changes you do not accept)..
I am beginning to waffle.

Your statement of the issue above got me wondering, "What does it mean 'to know infallibly'?". Simply to say "the agent knows" implies infallibility by the definition of the word 'know'. But that's hardly convincing. Your argument would say that it is never appropriate to assert "Y knows X" for any X or Y. But that would make the word 'know' useless.

But suppose the agent knows that it knows X. If indeed the agent knows X in the first place, knowing that it knows X in addition wouldn't strengthen the claim that it knows X. It would only provide additional knowledge which is outside or above the first circumstance, and which could in principle even inhere in a separate agent. We could have, for example, Agent B knows that Agent A knows X.

This led me in four different directions. First is to note that you and I, in this discussion, are in that circumstance. We are questioning whether we can know that Agent A knows X. That is a different question from "Can Agent A know X?". I think it may be possible that Agent A can know X while at the same time it is impossible for Agent B to know that Agent A knows X. If that possibility turns out to be the case, then we may not be able to resolve this issue here.

The second direction is to extend the chain by supposing that the agent knows that it knows that it knows X. Does that help any? It seems to, because now there is even more knowledge than before. What about extending the chain to a million links?

The third direction is to salt this chain with one or more 'believes': Can the agent believe it knows X? Know it believes X? Know it believes it knows? Know it knows it believes? Believe it believes it knows? Etc.

The fourth is to reintroduce Agent B here and there in different versions of all those chains. For example: can Agent B know that Agent A believes that Agent B knows X?

This is not meant to be silliness or sophistry, although it sounds like both. Instead, the point I am trying to make is that the issue you articulated is very complex. I think that to resolve it, we would need not only to identify X (the example of a fact that can be known), but we would also need to identify all the players (Agent A, Agent B, TEOMF, TEOPM, "I", "you", PNSMF, PNSPM) and the relationships among them, as well as the answers to many, if not all of those "chain" questions.
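To make the combinatorial point concrete, here is a small Python sketch, purely illustrative, that enumerates the "chain" statements described above: every nesting of "knows"/"believes" by Agents A and B up to a given depth. It says nothing about which chains hold or are even coherent; it only shows how fast the space of questions grows (fourfold per level of nesting).

```python
from itertools import product

AGENTS = ["A", "B"]
ATTITUDES = ["knows", "believes"]

def chains(depth):
    """Yield every statement 'Agent_1 att_1 that ... Agent_n att_n X'
    for nesting depths 1..depth."""
    for n in range(1, depth + 1):
        for combo in product(product(AGENTS, ATTITUDES), repeat=n):
            yield " that ".join(f"{a} {att}" for a, att in combo) + " X"

all_chains = list(chains(3))
print(len(all_chains))   # 4 + 16 + 64 = 84 distinct questions by depth three
print(all_chains[0])     # 'A knows X'
print(all_chains[-1])    # 'B believes that B believes that B believes X'
```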

I am not prepared even to guess at the outcome of a resolution, but at this point I am willing to concede that my requirement for infallible knowledge may be unnecessarily strong. I'm not sure your proposed substitutions are the right ones either, however. Maybe it should be a longer chain of knowing and believing.

For the record, my views of the relationships among the players I listed are:

Agent A = Agent B = TEOMF = TEOPM = CC

PNSMF and PNSPM are separate and distinct chemical vehicles being driven by CC.

"I" and "you" are used ambiguously and should be identified with each use.

moving finger said:
And if the person is also operating deterministically?
The automaton was an analogy. Little is to be gained by staking much on the details of one of the analogs. But the analogy aside, you are asking about the consequences of the case where the conscious agent operates deterministically. I'd say in that case there is no free will.

moving finger said:
Moving the problem around without actually addressing the problem seems (with respect) to be rather pointless?
I don't think it is pointless. The point is that it provides a different hypothesis from which to work. My only suggestion is that we explore the hypothesis of a single consciousness and see where it leads. My suspicions are that it will be more fruitful than the hypothesis of "PNSx contains TEOx", or even "The physical world of PNSx contains TEOx".

moving finger said:
If however one takes a pragmatic approach and defines free will such that free will is possible (even though it may not provide a very satisfying or intuitively “nice” result in terms of the "feeling" of free will), then explaining how free will operates is also possible (this is my approach).
That may be true. But unless and until you actually produce that explanation for how free will operates, the mystery remains. As of this date, I still maintain that free will is a mystery in every model.

Much fun talking with you, MF. Thanks.

Paul
 
  • #119
Hi Paul.

I just thought I would comment on this statement. I hope you don't mind.
Paul Martin said:
But that would make the word 'know' useless.
Not at all. Everyone may agree that knowing means exactly what you want it to mean, and they may even "know" some things; however, all they can really be sure of is that they think they know. That is the central issue of my work.

Have fun -- Dick
 
  • #120
Doctordick said:
Not at all. Everyone may agree that knowing means exactly what you want it to mean, and they may even "know" some things; however, all they can really be sure of is that they think they know. That is the central issue of my work.
Yes, I agree there would still be a use for the word. But that's not the issue. I think the questions here are:

1. Is "knowing" the same thing as "knowing infallibly"?

2. Is it possible in principle to know anything?

3. Is it possible in principle to know that you know anything?

4. Is it possible in principle to know that another knows anything?

I think we all agree that 1=yes.

It sounds like you are saying 2=yes.

I think Moving Finger is saying 2=no.

I think you are saying 3=no, and that that is the central issue of your work.

I think MF would have to say 3=no since 2=no.

I think both of you would have to say 4=no since 3=no.

I would say that 1=2=3=yes and that 4 is a non-question since there is only one knower.

(Good to hear from you, Dick. I started another letter to you this morning, but I didn't get it finished or sent. You give me too much homework.)

Paul