How can brain activity precede conscious intent?

  • Thread starter: Math Is Hard
  • Tags: Delay

Summary:
Research by Benjamin Libet and Bertram Feinstein indicates a half-second delay between brain activity and conscious sensation reporting, suggesting that electrical signals related to motor tasks can occur before conscious intent to act. This raises questions about the nature of free will, as some argue that actions may be initiated unconsciously, with conscious awareness only intervening to veto actions. Critics of Libet's findings point out the complexity of distinguishing between conscious decisions and subconscious processes, questioning the reliability of measuring conscious awareness. The discussion highlights the philosophical implications of these findings, particularly regarding the relationship between consciousness and reality. Overall, the debate centers on whether free will exists if actions can precede conscious intent.
  • #61
Doctordick said:
Ok! Presume that is possible and give me a single consequence (other than the "feel good about it" attitude the concept produces) that you can "squink" up. And that "squat" we can think about. :smile:

If we did not have FW, we would have only two options:

1) to solve problems by some a priori, pre-programmed instinct [*]

2) not to solve them at all

With FW, we have the further option to

3) Solve problems by trial-and-error experimentation

(which is also an example of a successful strategy, or rather a meta-strategy).

[*] You could claim that we can solve problems with strategies we learn
from our elders, rather than instinctively, but such strategies have to originate
from somewhere, so this is a variation on (3).
 
  • #62
Tournesol said:
If we did not have FW, we would have only two options:

1) to solve problems by some a priori, pre-programmed instinct [*]

2) not to solve them at all

With FW, we have the further option to

3) Solve problems by trial-and-error experimentation

(which is also an example of a successful strategy, or rather a meta-strategy).

[*] You could claim that we can solve problems with strategies we learn
from our elders, rather than instinctively, but such strategies have to originate
from somewhere, so this is a variation on (3).

Not at all. Our brains, even if deterministic, can have access to a pseudorandom number generator, which will stand in for a fair coin toss for all practical purposes, and thus randomized strategies become available. Computer systems, which I don't suppose you consider to have free will, do this all the time. See genetic programming, Monte Carlo simulation, and so on.
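A minimal sketch of what I mean (hypothetical, illustrative code only -- the fitness function and parameters are invented for the example, not taken from anywhere): a fully deterministic program that uses a seeded PRNG to carry out randomized trial-and-error.

```python
import random

# Minimal illustration: a fully deterministic program doing randomized
# trial-and-error with a seeded PRNG.  The "problem" (maximise the number
# of 1-bits in a string) and all parameters are invented for the example.

def fitness(candidate):
    return sum(candidate)            # toy objective: count the 1-bits

def trial_and_error(n_bits=20, trials=1000, seed=42):
    rng = random.Random(seed)        # pseudorandom, hence deterministic
    best = [rng.randint(0, 1) for _ in range(n_bits)]
    for _ in range(trials):
        trial = list(best)
        i = rng.randrange(n_bits)    # "toss a coin" to pick a bit to flip
        trial[i] ^= 1
        if fitness(trial) >= fitness(best):
            best = trial             # keep the change if it didn't hurt
    return best

if __name__ == "__main__":
    print(trial_and_error())
```

Nothing in that loop knows or cares whether the random bits come from a PRNG or a "real" coin; the randomized strategy works either way.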
 
  • #63
Tournesol said:
With FW, we have the further option to

3) Solve problems by trial-and-error experimentation
Now please explain why FW is necessary to solve problems by "trial-and-error"? :confused: To my mind "trial-and-error" is the very definition of evolutionary elimination of failure. :smile:

Have fun -- Dick
 
  • #64
selfAdjoint said:
Not at all. Our brains, even if deterministic, can have access to a pseudorandom number generator, which will stand in for a fair coin toss for all practical purposes, and thus randomized strategies become available. Computer systems, which I don't suppose you consider to have free will, do this all the time. See genetic programming, Monte Carlo simulation, and so on.

It all depends on what you mean by FW. For compatibilists like Dennett, PRNs are enough. OTOH, there is objective evidence of real randomness, and the subjective feeling of elbow room -- why not use the one to explain the other?
 
  • #65
Doctordick said:
Now please explain why FW is necessary to solve problems by "trial-and-error"?

You need some way of settling what to do next in the absence of a pre-programmed methodology. Whether that is 'real' FW is a matter of definition -- see my other reply.

To my mind "trial-and-error" is the very definition of evolutionary elimination of failure.
Have fun -- Dick

Well, I was suggesting that FW is an evolutionary [meta-]strategy, wasn't I?
 
  • #66
Tournesol said:
It all depends on what you mean by FW. For compatibilists like Dennett, PRNs are enough. OTOH, there is objective evidence of real randomness, and the subjective feeling of elbow room -- why not use the one to explain the other?


Well, because there is no real basis for doing so. Strong free will is just a desire we have, and making up reasons for your desires to be true is deluding yourself.
 
  • #67
I'll try again to get some help from someone reading this thread to understand Libet's experiment a bit better. My doubt concerns which delay it is actually measuring, as I said in an earlier post.
I guess it is a delay between the neuronal firing that marks the beginning of a supposedly intentional action and the thought of having the intention to start that action.
OK, it seems we start the action half a second before the thought of having the intention to.
But, in my view, perhaps a basic or raw feeling of having the intention is prior to the thought of having that intention. I mean, I can start an action when "I feel like doing something", as language says, which could be before "I think I feel like doing something".
An example: the athlete could start running when he hears the shot, not when he thinks "I've heard the shot" (that would be too late); the athlete starts running half a second before the thought, but not half a second before hearing the shot (otherwise he would be disqualified).
Something like that. I'd appreciate some help. Thanks.
 
  • #68
selfAdjoint said:
Well, because there is no real basis for doing so. Strong free will is just a desire we have,

That isn't even correct as a definition of FW. The feature of FW that creates problems with regard to determinism is the ability-to-have-done-otherwise.

and making up reasons for your desires to be true is deluding yourself.

That is back to front. If you have reason to believe FW is impossible (such as reason to believe in determinism and to reject compatibilism), then you have reason to conclude FW can only be an illusion. But you are certainly not entitled to start off on that basis.
 
  • #69
I have strong reason to believe the universe, including ourselves, is random where it isn't deterministic. This, as everybody agrees, destroys free will if we take it seriously and apply it to our consciousness. As a monist, I do so. And Libet's experiment stands as an empirical demonstration of it.
 
  • #70
selfAdjoint said:
I have strong reason to believe the universe, including ourselves, is random where it isn't deterministic. This, as everybody agrees, destroys free will if we take it seriously and apply it to our consciousness.

There are a few exceptions to that rule, such as Robert Kane, and yours truly:

http://www.geocities.com/peterdjones
 
  • #71
antfm said:
I'll try again to get some help from someone reading this thread to understand Libet's experiment a bit better. My doubt concerns which delay it is actually measuring, as I said in an earlier post.
I guess it is a delay between the neuronal firing that marks the beginning of a supposedly intentional action and the thought of having the intention to start that action.
OK, it seems we start the action half a second before the thought of having the intention to.
But, in my view, perhaps a basic or raw feeling of having the intention is prior to the thought of having that intention. I mean, I can start an action when "I feel like doing something", as language says, which could be before "I think I feel like doing something".
An example: the athlete could start running when he hears the shot, not when he thinks "I've heard the shot" (that would be too late); the athlete starts running half a second before the thought, but not half a second before hearing the shot (otherwise he would be disqualified).
Something like that. I'd appreciate some help. Thanks.
I think we can divide self-consciousness into pre-reflective self-consciousness and reflective or introspective self-consciousness. The pre-reflective kind is our mode (probably) the majority of the time, as we are immersed in our activity in the world. Asking the subject to monitor awareness brings things into reflective mode, which it is reasonable to assume introduces some (additional?) delay. Commentators who divide modes simply into conscious and unconscious miss this important nuance.

Now it still is a meaningful result that the self which is felt to exist in our reflective mode can't be responsible for initiating action. To the extent that this really is the folk concept of free will, it seems to be refuted by the evidence.
 
  • #72
Thanks, Steve. We meet again. Yes, I totally agree. That is the part I thought was missing from the usual interpretation of the experiment.

I am not especially interested in saving free will from refutation, but as much as the results of the experiment seem to prove that the folk concept of free will doesn't work, they could also point to the failure of the folk concept of self. As you say, we often dismiss that pre-reflective self-consciousness (and it would be a part of the self).

In the example of the athlete, she knows beforehand that she has to start running when she hears the shot. Starting to run when she hears it is part of her self's behaviour, though it is perhaps not reflective self-behaviour.

It is the same, I think, as when we drive on autopilot, and in many other daily actions that happen without that reflective aspect. Even so, we claim that our selves are always in charge.

Anyway, I find your explanation very insightful. Thanks.
 
  • #73
Tournesol said:
That isn't even correct as a definition of FW. The feature of FW that creates problems with regard to determinism is the ability-to-have-done-otherwise.

That simply isn't true. An electron in any given state had the option and could-have-done-otherwise, according to quantum mechanics. This hardly means that the electron has free will.
 
  • #74
loseyourname said:
That simply isn't true. An electron in any given state had the option and could-have-done-otherwise, according to quantum mechanics. This hardly means that the electron has free will.

That the electron could-have-done-otherwise means there is an incompatibility between QM and strict causal determinism.

That people could-have-done-otherwise if they have FW means there is an incompatibility between FW and strict causal determinism.

If you are saying that could-have-done-otherwise is not sufficient for
FW, that would be correct, but I am not maintaining that it is.
 
  • #75
Tournesol said:
If you are saying that could-have-done-otherwise is not sufficient for FW, that would be correct, but I am not maintaining that it is.

Yes, that is what I'm saying. Perhaps we should enumerate exactly what we think the sufficient conditions are for a freely willed action in a volitional agent.
 
  • #76
loseyourname said:
Yes, that is what I'm saying. Perhaps we should enumerate exactly what we think the sufficient conditions are for a freely willed action in a volitional agent.

1) lack of external compulsion (a gun pointed at one's head)

2) lack of internal compulsion (addiction) or other interference (loss of sanity)

3) possession of the appropriate faculty of volition in the first place.

(1) and (2) are familiar from legal arguments, which take (3) for granted.

What (3) actually consists of is the philosophical point. Compatibilists and incompatibilists disagree about whether could-have-done-otherwise is a necessary ingredient. Hardly anyone thinks it is sufficient.
 
  • #77
Tournesol said:
The question is whether a complex system like the brain can utilise randomness to obtain "elbow-room" (the ability to have done otherwise) without sacrificing rationality. Given the limits on de-facto rationality, I think the answer is yes.
This brings to my mind a simple thought experiment. Suppose I have written a very complex computer program (one might think of a virtual war-game implementation, or maybe even just a chess-playing program) in which extended computation of possible consequences is carried out at every step and reckoned against some value reference. Now, when the values at a step are equal, or when following every possible path would exceed the computational power of the machine (or take too long -- and note that this might even occur during the value-reckoning phase), suppose we use a random number generator governed by a phenomenon subject to the Heisenberg uncertainty limitation. Several things happen here. First, I can certainly have the computer print out the final result that yielded the scenario with the highest value (which we could call the computer's hoped-for final result). Second, I could also have the computer print out the sequence it went through to reach that final scenario and where its doubts lay (the places where it relied on the random number generator, i.e. where it was "guessing"). And finally, the result certainly would not be completely predictable, as it depends directly on a number of absolutely random events.
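To make the mechanics concrete, here is a very rough sketch (hypothetical code: the "value reference" is a toy formula, and an ordinary operating-system random source stands in for the Heisenberg-limited device): the program scores its options at each step, breaks ties with the random source, and records every place where it had to "guess".

```python
import random

# Rough sketch of the thought experiment.  The "value reference" is a toy
# formula, and random.SystemRandom (an OS entropy source) stands in for the
# quantum-limited generator: ties are settled by it, and every such "guess"
# is recorded.
rng = random.SystemRandom()

def value(option, depth):
    # toy value reference; a real program would compute consequences here
    return (option * depth) % 3

def decide(options, depth):
    scores = [(value(o, depth), o) for o in options]
    best_score = max(s for s, _ in scores)
    best = [o for s, o in scores if s == best_score]
    guessed = len(best) > 1                  # tie: fall back on randomness
    choice = rng.choice(best) if guessed else best[0]
    return choice, guessed

def run(steps=10):
    history = []                             # the sequence it went through
    doubts = []                              # the places where it "guessed"
    for depth in range(steps):
        choice, guessed = decide([0, 1, 2, 3], depth)
        history.append(choice)
        if guessed:
            doubts.append(depth)
    return history, doubts

if __name__ == "__main__":
    history, doubts = run()
    print("chosen sequence:", history)
    print("steps where it relied on the random source:", doubts)
```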

Now the machine will make decisions for reasons it can list. Would one say it has "free will"? I think it would at least act as if it had free will.

Have fun -- Dick
 
  • #78
Tournesol said:
Why can't FW be both what it is traditionally assumed to be and a "successful behaviour"?
I think I misunderstood you when you posted this originally. I thought you were asserting that it was exactly "what it is traditionally assumed to be". My position is, "of course, it could be." However, exactly "what it is traditionally assumed to be" needs to be considerably cleaned up before the meaning of the statement is clear.

Have fun -- Dick
 
  • #79
Tournesol said:
That is back to front. If you have reason to believe FW is impossible (such as reason to believe in determinism and to reject compatibilism), then you have reason to conclude FW can only be an illusion. But you are certainly not entitled to start off on that basis.
Why not?

Have fun -- Dick
 
  • #80
Necessary and Sufficient Conditions for Free Will

loseyourname said:
Perhaps we should enumerate exactly what we think the sufficient conditions are for a freely willed action in a volitional agent.
In my humble and speculative opinion, the sufficient conditions for FW are:
1. A two-way communication link between brain (or robot) and the conscious agent.
2. A working connection between perception-related components of the brain (or robot) and the output side of that link.
3. A working connection between the motor function components of the brain (or robot) and the input side of that link.

The necessary conditions are (again IMHASO):
1. The conscious agent must know that multiple options for action are available.
2. The conscious agent must know at least something about the probabilities of near-term consequences of at least some of the options in case they are acted out.
3. The conscious agent must be able to choose and execute one of the options in the folklore sense of FW.
 
  • #81
Math Is Hard said:
... I find it baffling - it just doesn't seem possible - and I wondered what your thoughts were on this.
For what it's worth, here are my thoughts.

The delay is accounted for by the time it takes for information about the perception of the signal to travel on the link from brain to conscious agent, for the conscious agent to exercise a FW action, and for the signal to execute this action to travel back across the link to the brain. Part of the motor action is the expression of the report that conscious awareness of the stimulus and action has occurred.

What I would suggest people consider when trying to interpret this experiment is the possibility that consciousness is not seated in the brain but instead resides somewhere that requires a measurable amount of time for a signal to travel between the two. For an extreme analogy, think of the brain as the computer on a Mars rover and the conscious agent as the scientist at JPL driving the rover. The signal delay in this case is substantial.

If we perform Libet's experiment on the rover, we will stimulate the on-board computer and measure the reaction time. If the response can be made strictly from the rover without requiring communication with JPL, then this would be equivalent to a reflex action and consciousness would not be involved.

If the stimulation needs conscious attention before an action can be taken, then a round-trip communication with JPL must take place causing a long delay.

To duplicate Libet's "baffling" case, suppose the scientist at JPL wants to initiate some rover action, verify that the action occurred, and then report from the rover to an observer on Mars that the scientist knows that the action took place. The command would be sent to the rover initiating the action. The rover would then transmit back to JPL information about the results of the action. The scientist would then become aware of the action and send the signal back to the rover reporting that the action occurred. The delays involved would be obvious.
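To make the delays explicit, here is a toy timing model (hypothetical code with entirely made-up numbers; only where the delay shows up matters, not its size): a reflex is handled on board, a conscious response costs two link traversals plus the agent's own processing, and the agent's report of awareness trails the start of the action by a full round trip.

```python
# Toy timing model of the rover/JPL analogy.  All numbers are invented;
# the only point is *where* the delay appears, not how big it is.

LINK_DELAY = 0.25    # one-way travel time, brain <-> conscious agent (s)
ROVER_REFLEX = 0.05  # on-board (non-conscious) processing time (s)
AGENT_THINK = 0.10   # conscious agent's own processing time (s)

def reflex_response():
    """Stimulus handled entirely on board: no link traversal needed."""
    return ROVER_REFLEX

def conscious_response():
    """Stimulus must reach the agent and a command must come back."""
    return ROVER_REFLEX + LINK_DELAY + AGENT_THINK + LINK_DELAY

def awareness_of_own_action():
    """Agent initiates an action, then learns that it has happened:
    the action starts one link-delay after the command, but awareness
    of it arrives only after the result has travelled back."""
    action_starts = LINK_DELAY
    awareness = action_starts + LINK_DELAY + AGENT_THINK
    return action_starts, awareness

if __name__ == "__main__":
    print("reflex response:    %.2f s" % reflex_response())
    print("conscious response: %.2f s" % conscious_response())
    start, aware = awareness_of_own_action()
    print("action begins at %.2f s; awareness reported at %.2f s" % (start, aware))
```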
 
  • #82
Doctordick said:
Why not?

It's begging the question.
 
  • #83
Tournesol said:
It's begging the question.
It's begging what question? If you are going to go around discounting possibilities, it seems to me that your position is quite closed-minded. I certainly do not claim infallible knowledge on any point.

You seem so rational when you talk to others (at least, in the great majority of cases, I find your responses to be quite rational), but your responses to my comments almost always surprise me. The only explanation I can comprehend at the moment is that you just don't understand what I am saying, and I don't know where the fault lies.

Totally in the blind --Dick
 
  • #84
Paul Martin said:
For what it's worth, here are my thoughts.
Hi Paul,
Thank you for your thoughts. I'm sorry I have been taking a long time to think through this. I am slow. :redface: I thought about this some this morning just as I was waking and then I got up and drew some diagrams and tried to understand your analogy better.

What I still can't get is that this "conscious agent" you mentioned seems to be an un-/pre-/sub-conscious (still searching for the right word) agent, since it is acting before any processing that occurs in the physical brain. Can we still call it a conscious agent if its commands occur before conscious awareness of giving the instructions?

On another topic: here is a possibility that I am considering. I send an instruction to the Mars rover, and the instruction says, "over the next 3 minutes, at random intervals you will turn in a random direction". So consciously I have made the decision that the robot will perform random actions during the time span I have specified. This only happens because I decided it. This is why I don't buy any of these arguments against free will. No matter what the robot randomly chooses to do, it was I who placed the order to act randomly (but in the desired fashion) in the first place.
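Roughly, the instruction I have in mind would look something like this (hypothetical code with a made-up turn() command, purely for illustration):

```python
import random
import time

def turn(direction_degrees):
    # made-up rover command; a real rover API would go here
    print("turning to heading %d degrees" % direction_degrees)

def act_randomly(duration_s=180, seed=None):
    """Over the next duration_s seconds (3 minutes by default), turn in a
    random direction at random intervals.  I chose the rule; the robot
    fills in the random details."""
    rng = random.Random(seed)
    deadline = time.time() + duration_s
    while time.time() < deadline:
        time.sleep(rng.uniform(1, 30))   # wait a random interval
        turn(rng.randrange(0, 360))      # pick a random direction

if __name__ == "__main__":
    act_randomly(duration_s=10)          # shortened run, just to see output
```

The randomness all lives inside the loop, but the rule itself -- turn randomly for the next three minutes -- was my decision.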

I'd be happy to hear your thoughts on this. I apologize if I misunderstood any of your comments in my naivete. :redface:
 
  • #85
Paul Martin said:
In my humble and speculative opinion, the sufficient conditions for FW are:
1. A two-way communication link between brain (or robot) and the conscious agent.
2. A working connection between perception-related components of the brain (or robot) and the output side of that link.
3. A working connection between the motor function components of the brain (or robot) and the input side of that link.

The necessary conditions are (again IMHASO):
1. The conscious agent must know that multiple options for action are available.
2. The conscious agent must know at least something about the probabilities of near-term consequences of at least some of the options in case they are acted out.
3. The conscious agent must be able to choose and execute one of the options in the folklore sense of FW.

What's a "conscious agent" if you define it as necessarily separate from the brain and/or robot? If you take that phrase out of your formulation, the Mars rover meets your standards (unless you're using an experiential, rather than functionalist definition of the verb 'to know').
 
  • #86
Tournesol said:
The question is whether a complex system like the brain can utilise randomness to obtain "elbow-room" (the ability to have done otherwise) without sacrificing rationality. Given the limits on de-facto rationality, I think the answer is yes.
IMHO, "free will" is not in any way dependent on the presence of randomness.

Perhaps you would care to explain how you can take an agent bereft of free will and then suddenly endow it with free will, simply by introducing some randomness into its thought processes?

The idea is a non-starter.

See this thread for a much deeper discussion of the concepts involved :

https://www.physicsforums.com/showthread.php?t=71281

MF
:smile:
 
  • #87
Sorry to sound like a stuck record, but I've noticed that the debate in this thread revolves around the concepts of "free will" and "consciousness" - but have the participants agreed on definitions of these concepts? (I quickly scanned the thread, so I apologise if these definitions have already been agreed.)

In so many debates I see people taking sides and arguing endlessly against each other, when in fact they are just wasting so much time because they are not defining things the same way.

Can anyone summarise the definitions of "free will" and "consciousness" that are pertinent to this debate?

Cheers!

MF
:smile:
 
  • #88
Doctordick said:
It's begging what question?

Let me reconstruct...

Tournesol said:
That is back to front. If you have reason to believe FW is impossible (such as reason to believe in determinism and to reject compatibilism), then you have reason to conclude FW can only be an illusion. But you are certainly not entitled to start off on that basis.

<<i.e. not entitled to start off on the basis that FW can only be an illusion>>

DD said:
Why not?

Tournesol said:
It's begging the question

<<i.e. starting off on the basis that FW can only be an illusion is begging the question>>


If you are going to go around discounting possibilities,

Assuming FW must be illusory is discounting possibilities.
 
  • #89
moving finger said:
IMHO, "free will" is not in any way dependent on the presence of randomness.

Perhaps you would care to explain how you can take an agent bereft of free will and then suddenly endow it with free will, simply by introducing some randomness into its thought processes?
How can an agent have FW without the ability to have done otherwise?
 
  • #90
Tournesol said:
How can an agent have FW without the ability to have done otherwise?
Randomness ensures that an outcome is indeterministic. What does this have to do with "free will"?

How does the introduction of an indeterministic outcome suddenly endow "free will" to an agent that was previously bereft of "free will"?

Can you give an example?

MF
:smile: