Does Neuroscience Challenge the Existence of Free Will?

  • Thread starter: Ken G
  • Tags: Free will, Neural

Summary:
The discussion centers on the implications of Benjamin Libet's research, which suggests that decisions occur in the brain before conscious awareness, raising questions about free will and determinism. Participants explore whether this indicates a conflict between determinism and free will, proposing that neurological processes may be deterministic while free will could exist in a non-physical realm. The conversation critiques the reductionist view that equates physical processes with determinism, arguing instead for a more nuanced understanding that includes complexity and chaos theory. The idea that conscious and unconscious processes are distinct is emphasized, with a call for a deeper exploration of how these processes interact in decision-making. The limitations of current neuroscience in fully understanding consciousness and free will are acknowledged, suggesting that a systems approach may be more effective than reductionist models. Overall, the debate highlights the complexity of free will, consciousness, and the deterministic nature of physical processes, advocating for a more integrated perspective that considers both neurological and philosophical dimensions.
  • #121
Ken G said:
I'm not saying we need to connect to the quantum scale, I view the quantum scale as simply an example of the kinds of unexpected correlations that emerge only when you know what to look for. For example, in a quantum erasure experiment...

The lessons of quantum mechanics are completely relevant to my POV. What better example of downward causation is there than the idea that "observers" are required to develop a classical crispness in reality?

A quantum event, as in the eraser experiments, is shaped by the global constraints of the path set up by an experimenter. Unless you go to extreme ontologies like hidden variables or many worlds, you have to say there is nothing concretely "there" at the microscale apart from a quantum potential that then gets shaped to have an identity by the particular structure of classical world.

The classical world acts downwards to decohere the unformed potential of the quantum realm. There is nothing fixed at the QM scale, until it has been fixed.

And also consistent with my systems view, the classical world can only constrain the quantum potential, not determine it. Downward causation can restrict the degrees of freedom, but not remove them all. So there is still that essential "randomness" or indeterminacy about what actually does happen (even if statistically, that indeterminacy has hard bounds).

Q Goest mentions that this is the case...and then hurriedly says he doesn't want to go there. Indeed, it would be fatal to his insistence on causal supervenience. (Unless he resorts to hidden variables, many worlds, the usual attempts to avoid the "weirdness" of downward acting constraints by leaping to far weirder views of nature that appear to preserve the mechanical principle that all that exists is bottom-up atomistic construction).
 
  • #122
apeiron said:
The classical world acts downwards to decohere the unformed potential of the quantum realm. There is nothing fixed at the QM scale, until it has been fixed.
Yes, I agree quantum mechanics forms an excellent example of this kind of surprising phenomenon, even if brains are not manifestly quantum mechanical. In quantum mechanics, everything happens that can happen, and it is only sorting of possibilities by the outer-scale environment that decides what actually does happen and brings the phenomena into the classical realm of black-and-white thinking. There remains no self-consistent ontology (without paying a radical price of subordinating the observed phenomena to how we conceptualize it) for describing how the microcausations that appear in the Schroedinger equation can create a single emergent classical reality. Decoherence is how we treat this event, but it involves deciding what we will care about, not an internally self-consistent treatment. I think that is also a relevant analogy to what FEA does-- first decides what we will care about, and then sees what harvest we reap from those choices.
Q Goest mentions that this is the case...and then hurriedly says he doesn't want to go there. Indeed, it would be fatal to his insistence on causal supervenience.
I think the main source of disagreement is a choice of stance-- Q_Goest takes the position that he would prefer not to leave a certain comfort zone, with demonstrable benefits, until he is absolutely certain it is required to do so, and what's more, he will tend to rig the meaning of "success" to increase its likelihood. We are taking the tack that it is better to jump into the unknown and murkier waters, and hope to discover something fundamentally new, than to avoid it simply because it is so murky.
(Unless he resorts to hidden variables, many worlds, the usual attempts to avoid the "weirdness" of downward acting constraints by leaping to far weirder views of nature that appear to preserve the mechanical principle that all that exists is bottom-up atomistic construction).
Interestingly, even many-worlds is not really an atomistic construction; it is about as holistic as they come. It begins and ends with accepting that the concept of a "state vector for the universe" is a meaningful and coherent construct, because if you start with that, and trust the Schroedinger equation, then you will always have it (and its attendant many conscious worlds emerging within), and if you do not start with it, then you will never have it appear. You get out exactly what you put in, and no experiment tells you if putting it in has done anything useful for you, so it's purely a desire to apply top-down imagery that motivates many worlds in the first place. Elsewhere, I've made the point that if you think about it, pure states don't propagate top-down, they propagate bottom-up: you get a pure state when you break a piece off from a larger system and force it to satisfy certain constraints, and there's really no other way that we ever encounter pure states in quantum mechanical analysis.
 
  • #123
Q_Goest said:
The only problem I see with this is that there is nothing 'functionally meaningful' that is gained by reducing classical scale phenomena (such as Benard cells or the interactions between neurons) to quantum scale interactions.

But again, that is merely an epistemological point. We all agree (I think) that when reductionism works in the pragmatic sense of useful theories, then it works.

But what is under debate is whether the atomistic approach exemplified by FEA works when we consider complex systems such as brains that "have freewill, consciousness and the appearance at least of downward causation". Or indeed if we step back to take a systems view of physical reality itself (one that includes the quantum scale for example).

So arguing that a modelling strategy works in some cases does not prove that it must work in all cases.

Unless you can make an ontological-level case that the map is the territory and reality really is just the sum of its parts. And here the observational evidence weighs heavily against you - such as the view from neuroscience and QM.
 
  • #124
Ken G said:
Interestingly, even many-worlds is not really an atomistic construction; it is about as holistic as they come. It begins and ends with accepting that the concept of a "state vector for the universe" is a meaningful and coherent construct, because if you start with that, and trust the Schroedinger equation, then you will always have it (and its attendant many conscious worlds emerging within), and if you do not start with it, then you will never have it appear. You get out exactly what you put in, and no experiment tells you if putting it in has done anything useful for you, so it's purely a desire to apply top-down imagery that motivates many worlds in the first place. Elsewhere, I've made the point that if you think about it, pure states don't propagate top-down, they propagate bottom-up: you get a pure state when you break a piece off from a larger system and force it to satisfy certain constraints, and there's really no other way that we ever encounter pure states in quantum mechanical analysis.

I see what you mean but I think the critical question is about when does a developing QM potential encounter the global constraints that collapse it into a classical state.

Many worlds avoids the issue of this encounter because all destinies happen. So instead of a single global history, you just spawn endless micro-branches of history. Constraints break reality at a local level, atomising its history, rather than constraint acting at the global level, where it is more than the sum of its parts and organises a single self-consistent history for reality.

Decoherence of course does not collapse the wave function but instead disperses it to a degree that it seems to have vanished as an issue - a Hopf flow model. But a systems view could fix this aspect of decoherence I believe by treating reality as a dissipative structure. The classical realm would have the downward causal power to actually collapse (that is constrain) any spreading QM degrees of freedom.

Pure states would propagate from the bottom-up on a probabilistic basis because constraint is only constraint (the top-down causality is not a simple deterministic causality). All that constraint means is that the boundary conditions (such as the experimenter's set-up) exist in concrete fashion. But then the QM potential still has its remaining "internal" degrees of freedom which the constraints can never see.

Take spin for example. If I can constrain a point to a locale, I still have left open its potential to be spinning. Constraining the translational symmetries does not remove the degrees of freedom represented by the rotational symmetries.

So classical reality exists by constraining the degrees of freedom represented by the notion of a pure, or unconstrained, quantum realm. And instead of the "observer" being a mysterious device outside the system as it is in the reductionist view, the observer is clearly identified with the global constraints, the information that does the downward causation.

And this systems view also leaves room for the remaining quantum degrees of freedom that are the "weird" bit. When almost everything is being tidily regulated from the top-down, it can seem weird that the micro-scale is not actually under complete control but is still a little random.
 
  • #125
Maui said:
Just one example and i am leaving - the satellites that keep the GPS system working measure time differently, because it runs differently for observers in different frames of reference. This is a fact backed up by hardcore science and thousands of experiments. The implication of time 'flowing' differently is that your NOW has already happened(passed) in another frame of reference(e.g. that of the GPS clocks). It renders causality apparent(for some reason things seem(just seem) to have causes in the world of relativity). The other implication is that of free-will and free choice. It must also be just apparent. Add fields(the most consistent contemporary model we've built so far) and not just causality but everything observable is just excitations of a field(for some reason the excitations of the fields tend to conspire towards a seeming classical causality).

Ummmm... it seems nobody else is going to break this to you, so I will... what the hell are you talking about? I understand your example, which is not one I'd use, nor does it have ANYTHING to do with causality, and free will in this context.
 
  • #126
Q_Goest said:
Hi Pythagorean,

What is "it" you're referring to?

downward causation (in the strong sense). Isn't that what we were talking about?

I think you owe it to yourself to look at how neuroscience is treating the interactions of neurons at the level where computational modeling meets physical testing, both in dissociated neurons and in vivo.

Considering that's what both my courses and research consist of, I don't know what you think I'm missing. As we all point out though, the model is not the reality. The models are (successfully) predicting deterministic behavior of neural networks, but they don't pretend to be the full story. It's a story about the electrical signal carried by the neurons. We don't consider, for example, the effects of calcium-triggered quantal release in the synapse via SNARE and SNAP proteins, or the way in which the postsynaptic terminal influences the transcription factors of gene networks (neuroscience epigenetics). Many models don't even consider the complicated geometry and electrotonic properties of the dendritic processes. Many models don't consider volume transmission or global field effects. But they can, and especially if those behaviors are your interest.

There's still a lot we don't even know about neurotransmitter interactions in the synapse, and we're just beginning to understand the role of gap junctions in global regulation.

[...] there's nothing but local, causal interactions that create those global behaviors. That's the philosophy behind FEA* and it's the philosophy behind the compartment models used in neuroscience today.

I doubt that's any scientific field's prevailing philosophy; please provide clear evidence of this. I think this is your interpretation, particularly the "nothing but" qualifier. The prevailing philosophy is empiricism. To say there's nothing but local, causal interactions that create those global behaviors is a claim I've never seen. Whether it's true or not isn't important though. The interesting part of these complex systems is how the global behavior causes local interactions to occur, regardless of whether the global behavior originated from local interactions or not. We're not really interested in chicken and egg arguments in the lab; that's for philosophy forums.
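To make that concrete, here is a toy sketch (my own illustration, nothing like a research-grade compartmental model) of the kind of deterministic, local-rule network model being described: each leaky integrate-and-fire neuron integrates its drive and the spikes of its presynaptic neighbors, fires when it crosses threshold, and the network-level spiking pattern follows entirely from those local updates.

```python
import numpy as np

def simulate_lif(weights, i_ext, steps=200, dt=0.5,
                 tau=10.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire network: each neuron's voltage decays
    toward rest and is driven by external input plus spikes from its
    presynaptic neighbors. Entirely deterministic, purely local rules."""
    n = weights.shape[0]
    v = np.zeros(n)
    spikes = np.zeros((steps, n), dtype=bool)
    for t in range(steps):
        syn = weights @ spikes[t - 1] if t > 0 else np.zeros(n)
        v += dt * (-v / tau + i_ext + syn)
        fired = v >= v_thresh
        v[fired] = v_reset
        spikes[t] = fired
    return spikes

# Two neurons: the first is driven externally, the second only by the first.
w = np.array([[0.0, 0.0],
              [2.0, 0.0]])
spikes = simulate_lif(w, i_ext=np.array([0.15, 0.0]))
print(spikes.sum(axis=0))  # spike counts per neuron
```

Run it twice and you get the identical spike train: the determinism is in the local update rule, yet the thing of interest is the emergent network pattern.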
 
  • #127
apeiron said:
Many worlds avoids the issue of this encounter because all destinies happen. So instead of a single global history, you just spawn endless micro-branches of history. Constraints break reality at a local level, atomising its history, rather than constraint being at the global level where it is more than the sum of its part and organising a single self-consistent history for reality.
And yet, it is really many-worlds which is holistic, and has a whole that is more than the sum of its parts. Many-worlds subordinates the physicist to the physics, and so invents a gossamer web of invisible coherences that act like glue between the islands of different worlds. The whole pure state is more than the sum of the "worlds", because the worlds lack these connections that the denizens of these worlds can never cross or even perceive. Some would call that holism on steroids, and prefer to subjugate the physics to the physicist. In that case, there is no need to restore anything that has been ruled out by the constraints of the "classical realm" (by which we both mean, observer effects). When physics is seen as the way a brain interacts with its environment, rather than the way an environment gives rise to a brain, there is no need for the concept of a unified state-- a state is merely whatever usefulness and consistency is left when all that is useless or inconsistent with constraints has been thrown out. So many-worlds is a glue, and Copenhagen is a sifter.
And this systems view also leaves room for the remaining quantum degrees of freedom that are the "weird" bit. When almost everything is being tidily regulated from the top-down, it can seem weird that the micro-scale is not actually under complete control but is still a little random.
An emerging idea is that physics, if not consciousness itself, involves that razor's edge between that which seems determined and that which seems random. That tennis match may echo your points about an interplay between bottom-up and top-down interactions.
 
  • #128
nismaratwork said:
Ummmm... it seems nobody else is going to break this to you, so I will... what the hell are you talking about? I understand your example, which is not one I'd use, nor does it have ANYTHING to do with causality, and free will in this context.



You should have probably first read my previous posts instead of yelling. Anyway, my latest post(that you quoted) was hypothetically taking into account a global scale view of reality(the so-called "God's eye view") and the role causality plays in view of relativity. Causality in the blockworld is not a fundamental feature of reality, instead - it's just an ordering of events in a causally-looking way when the observer is in a particular FOR in this Lorentz-invariant reality. As i stated earlier, this undermines the idea that things and events are really what they are because of causality(though they appear to be what they are because of causality). I stand by my words, causality very likely will not be a fundamental feature of a TOE, but just apparent/emergent(it doesn't really matter if everyone is already aware of this, but there is already a consensus on this among those who work on the foundational issues).
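For concreteness, the GPS clock-rate offsets I mentioned can be estimated from the standard first-order formulas (a back-of-the-envelope sketch; constants and the circular-orbit assumption are approximate):

```python
import math

# Approximate constants (SI units)
GM = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
c = 2.99792458e8         # speed of light, m/s
r_earth = 6.371e6        # mean Earth radius, m
r_sat = 2.6571e7         # GPS orbital radius (~20,200 km altitude), m

v = math.sqrt(GM / r_sat)                     # orbital speed, circular orbit

# First-order fractional rate offsets of a satellite clock vs. a ground clock
grav = GM / c**2 * (1 / r_earth - 1 / r_sat)  # gravitational: satellite clock runs fast
vel = -v**2 / (2 * c**2)                      # velocity time dilation: runs slow

day = 86400
print(grav * day * 1e6)          # ~ +45.7 microseconds/day
print(vel * day * 1e6)           # ~ -7.2 microseconds/day
print((grav + vel) * day * 1e6)  # net ~ +38.5 microseconds/day
```

The net drift of roughly 38 microseconds per day is what the GPS system must correct for, which is the "hardcore science" backing the point.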

For causality to be fundamental, you'd need the universe of Isaac Newton. That universe is a mirage, however.


If you've been reading Luboš Motl's blog, you've probably seen where the highest bets are being placed(which confirms what apeiron and i said earlier on emergence):

http://motls.blogspot.com/2004/10/emergent-space-and-emergent-time.html
 
  • #129
Personally, I always get a chuckle when I see the term "fundamental" used in physics. What does that even mean? Perhaps when we look at the history of physics, we should start relating to what physics actually is rather than how we might like to imagine it. The natural conclusion is that the word "fundamental" by itself does not have meaning in physics, but "more fundamental" does. Given this, we should not be surprised that causality is not fundamental, but we can perhaps view it as "more fundamental" than a concept like space. The prevailing question of this thread is then, "which is more fundamental, causality or free will?" Or perhaps neither emerges from the other, but both emerge from something else.
 
  • #130
Ken G said:
Personally, I always get a chuckle when I see the term "fundamental" used in physics. What does that even mean? Perhaps when we look at the history of physics, we should start relating to what physics actually is rather than how we might like to imagine it. The natural conclusion is that the word "fundamental" by itself does not have meaning in physics, but "more fundamental" does. Given this, we should not be surprised that causality is not fundamental, but we can perhaps view it as "more fundamental" than a concept like space. The prevailing question of this thread is then, "which is more fundamental, causality or free will?" Or perhaps neither emerges from the other, but both emerge from something else.

Hmmm... The very search for what we like to imagine it to be is what set Einstein on his course to take a "Heuristic" view of light. It also hobbled him later in life...

... what to conclude from that?
 
  • #131
Maui said:
You should have probably first read my previous posts instead of yelling. Anyway, my latest post(that you quoted) was hypothetically taking into account a global scale view of reality(the so-called "God's eye view") and the role causality plays in view of relativity. Causality in the blockworld is not a fundamental feature of reality, instead - it's just an ordering of events in a causally-looking way when the observer is in a particular FOR in this Lorentz-invariant reality. As i stated earlier, this undermines the idea that things and events are really what they are because of causality(though they appear to be what they are because of causality). I stand by my words, causality very likely will not be a fundamental feature of a TOE, but just apparent/emergent(it doesn't really matter if everyone is already aware of this, but there is already a consensus on this among those who work on the foundational issues).

For causality to be fundamental, you'd need the universe of Isaac Newton. That universe is a mirage, however.

I'm fairly sure that the Bohmians would take exception to your assessment, and many others at that. Moreover, any speculation about a theory that is as elusive as any is a bit absurdist given your lead-in, but... OK. I think you should take these thoughts to QM, and see how they fly there (hint... lead... brick...)


Maui said:
If you've been reading Luboš Motl's blog, you've probably seen where the highest bets are being placed(which confirms what apeiron and i said earlier on emergence):

http://motls.blogspot.com/2004/10/emergent-space-and-emergent-time.html

I agree with apeiron, but in part because he makes his argument using... arguments and references. You're making suppositions and personal speculation, and that's nothing to work with, even if I agree with your conclusions. Again, perhaps that is what Q_Goest found wanting in your post?
 
  • #132
nismaratwork said:
Hmmm... The very search for what we like to imagine it to be is what set Einstein on his course to take a "Heuristic" view of light. It also hobbled him later in life...

... what to conclude from that?
Einstein was motivated by the search for something more fundamental, which is the highest goal of science. Why, then, do we feel the need to pretend that it was a search for something fundamental? Why can we not simply live in the truth?
 
  • #133
Ken G said:
Einstein was motivated by the search for something more fundamental, which is the highest goal of science. Why, then, do we feel the need to pretend that it was a search for something fundamental? Why can we not simply live in the truth?

Um... as much as I'm dying to say, "[We] can't HANDLE the truth!", I won't... I did.

Anyway, why?... because that search has continually borne fruit such as QM, and Relativity, and perhaps elements of String Theory. The search itself tends to drive the field forward, but then the search becoming myopic is clearly crippling.

Einstein certainly seemed to believe that there was an underlying elegance he could uncover, although it's true that he settled for "more" fundamental.

I'd add, the truth is ultimately inconsistent in the absence of a better means to join gravity and the forces described by QM.
 
  • #134
Ken G said:
But when trying to model something like free will, when will you claim success?

apeiron said:
So arguing that a modeling strategy works in some cases does not prove that it must work in all cases.

Pythagorean said:
As we all point out though, the model is not the reality. The models are (successfully) predicting deterministic behavior of neural networks, but they don't pretend to be the full story. It's a story about the electrical signal carried by the neurons. We don't consider, for example, . . . But they can, and especially if those behaviors are your interest.

I doubt that's any scientific field's prevailing philosophy; please provide clear evidence of this. I think this is your interpretation, particularly the "nothing but" qualifier. The prevailing philosophy is empiricism. To say there's nothing but local, causal interactions that create those global behaviors is a claim I've never seen. Whether it's true or not isn't important though. The interesting part of these complex systems is how the global behavior causes local interactions to occur, regardless of whether the global behavior originated from local interactions or not. We're not really interested in chicken and egg arguments in the lab; that's for philosophy forums.
Let’s talk about what these models are for a minute because I think the philosophy of why they are the way they are is being overlooked.

Going back to the 1950's, when computers were just starting to be used for modeling natural phenomena, one of the first uses of those models was in aerospace, where wings, for example, needed to be very accurately modeled. In a landmark paper by Turner et al., "Stiffness and Deflection Analysis of Complex Structures" (J. of the Aeronautical Sciences), the authors talk briefly about the fundamental philosophy behind why the model is created the way it is. Remember that although they're referring to stresses and displacements in a solid structure, the same philosophy is used for modeling fluid behavior such as for Benard cells, and for modeling brain behavior by compartmental models such as the Blue Brain project, as I'll show in a moment.
The analysis may be approached from two different points of view. In one case, the forces acting on the members of the structure are considered as unknown quantities. In a statically indeterminate structure [think of this as your “top down constraint”], an infinite number of such force systems exist which will satisfy the equations of equilibrium. The correct force system is then selected by satisfying the conditions of compatible deformations in the members. ...

In the other approach, the displacements of the joints in the structure are considered as unknown quantities. An infinite number of systems of mutually compatible deformations in the members are possible; the correct pattern of displacements is the one for which the equations of equilibrium are satisfied.

So what does that mean? He’s suggesting the fundamental philosophy behind all of classical mechanics. He’s pointing out that there needs to be an equilibrium condition at every point in the system. At every point, we have to have equilibrium conditions such as conservation of mass, energy, momentum, equilibrium of forces (think "free body diagrams"), etc ... This can be static equilibrium or dynamic equilibrium. The system as a whole isn’t in equilibrium unless all the parts within that system are also in equilibrium at all times.

The philosophy behind this model also points to a second fundamental premise. The forces or causal effects at every point within the system are local. Every local event is dependent on being in equilibrium, but it should also be emphasized that those equilibrium conditions are due to the local interplay of the parts. In the case of structural elements, those parts flex and create forces on neighboring elements due to their ability to compress or stretch (ie: the modulus of the material), the amount of mass and force (ie: stress) that allows for that part to accelerate, etc ...

In the Hodgkin-Huxley model of neurons, those elements of the neuron act like electrical elements such as resistors and capacitors, but the concept that they are similar to those electrical elements has nothing to do with the much more important and fundamental premise. The Hodgkin-Huxley model could have used water pipes, valves and pressure vessels. In fact, electrical phenomena and fluids are highly analogous, so the choice of electrical circuits was likely made because that is what they were familiar with, and because they actually USED electrical components to test with, as opposed to converting to a fluidics basis. But the model itself is inconsequential. The philosophy behind what's going on in nature is what's important, and for the compartmental models of the brain, regardless of what kind of parts (ex: resistors and wires or valves and pipes) we use to model neurons with, no matter what kind of mathematics we decide on, or what equations we use, there will always be the same philosophical premise that is identical for ALL of those models. That premise is twofold as described by Turner.

1. Static and/or dynamic equilibrium conditions must exist between every point in the system.
2. These equilibrium conditions are not affected by nonlocal causes; they are affected only by those conditions that are immediately local to the affected point in the system.
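As a toy illustration of those two premises (my own sketch, not from Turner's paper): a one-dimensional Laplace problem relaxed to equilibrium, where every interior node is repeatedly updated from its two immediate neighbors only. No node ever "sees" the global state, yet a global equilibrium profile emerges, fixed by the boundary conditions.

```python
def relax_to_equilibrium(n=11, left=0.0, right=1.0, tol=1e-10):
    """Jacobi relaxation of a 1D Laplace problem: each interior node is
    repeatedly set to the average of its two neighbors (a purely local
    equilibrium condition). Iteration stops when no node changes much."""
    u = [left] + [0.0] * (n - 2) + [right]
    while True:
        new = u[:]
        for i in range(1, n - 1):
            new[i] = 0.5 * (u[i - 1] + u[i + 1])  # local condition only
        if max(abs(a - b) for a, b in zip(u, new)) < tol:
            return new
        u = new

profile = relax_to_equilibrium()
# converges to the linear profile u(x) = x: the global state is fully
# determined by local balance plus the boundary constraints
```

Note that the only "top down" ingredient here is the fixed boundary values; everything else is local equilibrium, which is exactly the premise at issue.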
One might now question whether this is REALLY a fundamental philosophical notion or is it just how science models nature in general. I’m sure that’s what must come through anyone’s mind as they consider the above. However, it is clear that discussions around emergence have popped up to address this. Bedau (“Weak Emergence” Philosophical Perspectives) also recognizes this when he refers to weakly emergent phenomena such as described for example by cellular automata:
The phrase “derivation by simulation” might seem to suggest that weak emergence applies only to what we normally think of as simulations, but this is a mistake. Weak emergence also applies directly to natural systems, whether or not anyone constructs a model or simulation of them. A derivation by simulation involves the temporal iteration of the spatial aggregation of local causal interactions among micro elements. That is, it involves the local causal processes by which micro interactions give rise to macro phenomena. The notion clearly applies to natural systems as well as computer models. So-called “agent-based” or “individual based” or “bottom up” simulations in complexity science have exactly this form. They explicitly represent micro interactions, with the aim of seeing what implicit macro phenomena are produced when the micro interactions are aggregated over space and iterated over time. My phrase “derivation by simulation” is a technical expression that refers to temporal iteration of the spatial aggregation of such local micro interactions.
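Bedau's "derivation by simulation" is easy to exhibit with a toy model (my own illustrative choice, the well-known elementary cellular automaton Rule 110): every macro pattern is obtained purely by iterating a local three-cell rule over space and time.

```python
def step(cells, rule=110):
    """One update of an elementary CA: each cell's next state depends
    only on itself and its two immediate neighbors (periodic boundary).
    The 8-bit rule number encodes the output for each 3-cell pattern."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single live cell and iterate the local rule over time.
row = [0] * 40
row[20] = 1
history = [row]
for _ in range(20):
    history.append(step(history[-1]))
# 'history' is the macro pattern, derived purely by spatial aggregation
# of local micro interactions iterated over time (Bedau's phrase)
```

Nothing in the code refers to the macro pattern; it exists only as the aggregate of local updates, which is exactly what weak emergence claims about natural systems too.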

That really is how science and philosophy view the world. There is no room for “downward causes” that interact at the local level, that might force the system to change in a way that isn’t predictable from examining the local causal events. I will only make the caveat that these models are how classical mechanical interactions are made, and not how quantum mechanical interactions are viewed. That is a different issue and isn’t of relevance here, which is the ONLY reason I’m not going into that issue. If this above description of classical mechanics and the philosophy behind it isn’t clear and understood, going into the QM description is only going to confound things. Neurons interact at a classical scale; there are no quantum mechanical interactions between them that might lead to nonlocal causes arising from ‘top down constraints’. If/when such a phenomenon is shown to be pertinent to how a brain works, that’s fine. But for now, no one should be bringing quantum mechanics up as a reason to believe there's room for downward causation or any similar concept that might be put into different terms such as 'top-down' constraints. Such terms are misleading, since "top down" can mean something like how a door hinge makes the door rotate only around some given axis, which certainly isn't any kind of downward causation as referred to in philosophy of science.

One might now ask if downward causation (sometimes referred to as “strong downward causation”) has anything left to do with any of this? If events are local only as indicated above, how can global states intervene in the local events? Strong downward causation addresses this and suggests that “macro-causal powers have effects at both macro and micro levels, and macro-to-micro effects are termed downward causation.” (Bedau). Emmeche et al. describe it as “a given entity or process on a given level may causally inflict changes or effects on entities or processes on a lower level”. These authors and many more, including the paper by Farkas mentioned by apeiron, all dismiss strong downward causation as crackpot science. There’s no evidence for it and there’s no support for it. For future reference, please don’t point to crackpot web sites that suggest strong downward causation is some kind of debatable concept. If a paper suggests there is some kind of top down constraint, it doesn’t necessarily refer to strong downward causation. The concept of strong downward causation needs to go back into the closet it came out of, as does any similar argument that suggests micro level causes are commandeered to act differently in different systems.
 
  • #135
nismaratwork said:
Um... as much as I'm dying to say, "[We] can't HANDLE the truth!", I won't... I did.
:D
Anyway, why?... because that search has continually borne fruit such as QM, Relativity, and perhaps elements of String Theory. The search itself tends to drive the field forward, though the search becoming myopic is clearly crippling.
The search doesn't require adopting a belief system, it just involves doing science. The only required faith is the faith in the process, not the faith in the outcome. In fact, I would argue that faith in the outcome is what tends to close our minds to future advances, rather than the opposite. The same might be relevant to upwardly-causal approaches to free will.
Einstein certainly seemed to believe that there was an underlying elegance he could uncover, although it's true that he settled for "more" fundamental.
And indeed, elegance is certainly part of what we are seeking. We should look for elegance, and be happy when we find it, but not be seduced by it.

I'd add, the truth is ultimately inconsistent in the absence of a better means to join gravity and the forces described by QM.
Right, and although it is natural to always try to find more fundamental unifications that correct inconsistencies, it is unnatural to expect that physics will ever be absent of inconsistencies. Never was, why do we imagine it ever will be?
 
  • #136
Ken G said:
:D
The search doesn't require adopting a belief system, it just involves doing science. The only required faith is the faith in the process, not the faith in the outcome. In fact, I would argue that faith in the outcome is what tends to close our minds to future advances, rather than the opposite. The same might be relevant to upwardly-causal approaches to free will.
And indeed, elegance is certainly part of what we are seeking. We should look for elegance, and be happy when we find it, but not be seduced by it.

I agree.


Ken G said:
Right, and although it is natural to always try to find more fundamental unifications that correct inconsistencies, it is unnatural to expect that physics will ever be absent of inconsistencies. Never was, why do we imagine it ever will be?

I don't know... I think people expect the universe to conform to an anthropic view.
 
  • #137
nismaratwork said:
I don't know... I think people expect the universe to conform to an anthropic view.
Right, and we always think the ancient Greeks were so naive to put the Earth at the center! Still haven't learned that lesson.
 
  • #138
Ken G said:
Right, and we always think the ancient Greeks were so naive to put the Earth at the center! Still haven't learned that lesson.

I think it's amazing that we've come so far based on such limited personal experiences... it's a testament to the scientific method IMO.
 
  • #139
Agreed. Now the question is, where does the scientific method lead into the study of free will, and do we need a few new tricks?
 
  • #140
Ken G said:
Agreed. Now the question is, where does the scientific method lead into the study of free will, and do we need a few new tricks?

I'm stumped when it comes to that application... I feel as though we're trying to visualize something based on concepts like the mind and consciousness, which are not well defined. Well, they tend to be somewhat fluid at least, and each new discovery seems to raise questions in the philosophical arena, not resolve them.
 
  • #141
Hi Ken G,

The topic relates to free will (http://www.iep.utm.edu/freewill/). The theory in question wants to mark causal interaction as possible, but could it be? Can we have downward causation when we speak about token identity theories? Can we have even any mental causation?

If we want a mental event M1 to cause a physical event P2, and if we want the causal status of the mental to derive from the causal status of its physical realizer P1 (so that the theory doesn't fall into the substance-dualist category), we are faced with overdetermination (P2 would be caused both by M1 and by P1 alone). If no greater causal powers magically emerge at the higher level of M1 (if we want the theory to stay a materialistic one), then the causal powers of M1 are identical to the causal powers of P1, which means that P1 is the only cause of P2, and thus M1 becomes epiphenomenal. You can read more about this here: http://www.iep.utm.edu/mult-rea/#H4

So, in the materialistic view you can either have mental causation identical with the physical causation, or you can embrace epiphenomenalism and qualia. Either way, free will is impossible. If you want to find free will, you must seek it outside the materialistic domain.

Q_Goest,

In your post https://www.physicsforums.com/showpost.php?p=3179362&postcount=90 you say you don't believe in the phenomenal-physical correlation, and basically you reject epiphenomenalism. At first that doesn't look logical: how can one make a knowledge claim about consciousness if it's epiphenomenal? But does the agent's association of the conscious experience of some event with its labeled state in the brain involve any contradiction? The definition of the word "consciousness" in the brain state is not associated with the experience of it, but does this prevent the brain from being able to label a certain physical state? Think about how you would explain the word "consciousness" to a little boy, and what association his brain makes. For me, epiphenomenalism implies that in every millisecond your brain makes the optimal decision based on the available information. Even when you do something anti-evolutionary (take a lot of drugs, commit suicide) it must somehow be justified in your brain's calculations. Because if it's not, epiphenomenalism is wrong (remember, you didn't take the drug because YOU liked it, but because your BRAIN liked it).
 
  • #142
Q_Goest said:
...

You can go back farther than computers. The computer is basically just a glorified calculator used to solve differential equations that we can't solve by hand (or we could... but it would take hours and pages where the computer can do it in seconds and kilobytes).

But think about this: let's say you have some giant system of N differential equations to describe the whole universe. You have every single interaction reduced to a handful of variables. Now all you need to do is put in your initial conditions for those variables.

What do you do? Your theory already accounts for everything in the universe, yet your theory doesn't account for how the initial conditions arose. Do you make the initial conditions a function of some part of the system? So now there was always this loop and never a beginning or end? I'm puzzled, personally, I have no idea what I'd do.

Anyway, I'm hoping this demonstrates that the science and the philosophy are completely different, just like models and reality. As another example, we know that quantum mechanics underlies all classical observations, yet we naively model things in the old classical view. Why? Because it's effective, it's productive, it works. This is not the same way I approach the problem in a philosophical setting.
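To make the point about initial conditions concrete, here is a toy sketch (Python/NumPy; the system and the numbers are purely illustrative, not anything from this thread): a deterministic model cranks out every later state, yet the starting state has to be handed to it from outside.

```python
import numpy as np

# Toy illustration: a deterministic model that predicts every later
# state, yet needs its initial conditions supplied from outside --
# the equations themselves say nothing about where x0 came from.

def euler(f, x0, dt, steps):
    """Integrate dx/dt = f(x) by forward Euler from the supplied x0."""
    x = np.array(x0, dtype=float)
    history = [x.copy()]
    for _ in range(steps):
        x = x + dt * f(x)
        history.append(x.copy())
    return np.array(history)

# Example system: a damped oscillator, dx/dt = v, dv/dt = -x - 0.1 v
f = lambda s: np.array([s[1], -s[0] - 0.1 * s[1]])

traj = euler(f, x0=[1.0, 0.0], dt=0.01, steps=1000)
print(traj[-1])  # fully determined by the model -- but only given x0
```

Change `x0` and the entire trajectory changes, while the model itself stays the same; that is exactly the gap between the equations and their boundary data.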
 
  • #143
Pythagorean said:
You can go back farther than computers. The computer is basically just a glorified calculator used to solve differential equations that we can't solve by hand (or we could... but it would take hours and pages where the computer can do it in seconds and kilobytes).

But think about this: let's say you have some giant system of N differential equations to describe the whole universe. You have every single interaction reduced to a handful of variables. Now all you need to do is put in your initial conditions for those variables.

What do you do? Your theory already accounts for everything in the universe, yet your theory doesn't account for how the initial conditions arose. Do you make the initial conditions a function of some part of the system? So now there was always this loop and never a beginning or end? I'm puzzled, personally, I have no idea what I'd do.

Anyway, I'm hoping this demonstrates that the science and the philosophy are completely different, just like models and reality. As another example, we know that quantum mechanics underlies all classical observations, yet we naively model things in the old classical view. Why? Because it's effective, it's productive, it works. This is not the same way I approach the problem in a philosophical setting.

A "shut up and calculate" philosopher? If you weren't so clearly a dude, I'd be in love. :biggrin:
(not sarcasm)
 
  • #144
Ken G said:
And yet, it is really many-worlds which is holistic, and has a whole that is more than the sum of its parts.

I would still argue not, as many worlds is exactly the sum of its parts. Every locally forking history accumulates without any constraint. Holism would require that the local freedom to branch be restricted, so that the system only manifested some paths and not all of them.

By contrast in QM views of causality, a Feynman sum over histories approach to collapse is holistic as all paths "happen" but then there is a global constraint to some single self-consistent event.
 
  • #145
Ken G said:
Personally, I always get a chuckle when I see the term "fundamental" used in physics. What does that even mean? Perhaps when we look at the history of physics, we should start relating to what physics actually is rather than how we might like to imagine it. The natural conclusion is that the word "fundamental" by itself does not have meaning in physics, but "more fundamental" does. Given this, we should not be surprised that causality is not fundamental, but we can perhaps view it as "more fundamental" than a concept like space. The prevailing question of this thread is then, "which is more fundamental, causality or free will?" Or perhaps neither emerges from the other, but both emerge from something else.

Causality is of course fundamental, and freewill is as near epiphenomenal as you can get :smile:. Causality would be our general or universal model of why anything happens (why even existence happens), while freewill is just some vanishingly rare feature of a complex system, relatively impotent on the cosmic scale.

In philosophy, the fundamental is the general. In physics, it is generally taken to be the smallest scale - which is why atomistic reductionism is the driving idea.

And when it comes to identifying these general principles or universals, philosophy finds that they are always dichotomies or complementary/synergistic/asymmetric pairs.

So as well as the local, there is the global. As well as the discrete, there is the continuous. As well as flux, there is stasis. As well as chance, there is necessity, etc.

Which is why it is no surprise that causality itself is dualised. As well as bottom-up construction, there is top-down constraint.
 
  • #146
Q_Goest said:
Let’s talk about what these models are for a minute because I think the philosophy of why they are the way they are is being overlooked.

It's great that you are willing to get into the details of a defence of your view. And my reply is that you are missing the wood for the trees :cool:.

What you are highlighting here is simply the fact that allowing a system to go to global equilibrium then quite properly lets you drop the global causes from your model, because now you are only interested in what can change - the local variables, the local fluctuations, the local events. This is what reductionist modelling is all about.

It is right there in Newton's three laws of motion. The first two laws atomised the notion of local action into a force and a mass. Mass could have intrinsic motion which was inertial, and that made any globally observable change in motion the result of an atomistic force (a force vector).

So already in the first two laws, Newton's great reductionist simplification was to equilibrate away the global spacetime backdrop. Taking the Greek atomists' notion of the void, he said the background exists, but it is causally inert. It is simply an equilibrated or unchanging stage upon which there is a localised play of atoms - atoms of mass and atoms of force.

Then to make this highly reduced view of reality fly, he had to introduce his third law of action~reaction. For every forceful action, there is an equal and opposite forceful reaction - a little matching localised anti-vector.

Patently the reaction vector is not actually a symmetric entity. Instead it sums up all the contextual constraints that are found to impinge on the locale. If you push against the wall, then it is not just several square inches of wall that pushes back. It is the building, the planet to which it is attached, the gravity fields which affect the planet, etc.

The third law is the local equilibrium correction! The first two laws removed the generalised background, and the third quietly accounts for any disturbances of the global state by localising them to another linear and atomistic event - a reaction vector.

So this is the "philosophy" of physics - or at least the highly successful modelling strategy on which all mechanical thinking is based. Equilibrate away the global causes, the context that constrains, and you can then just describe reality in terms of local atomistic entities and local forceful changes. Just treat reality as a collection of actions happening in a mute void.

Now FEA just repeats the same exercise. If you can't equilibrate away the whole global story at once, then break the job up into a suitably grained set of compartments. Create localised equilibration stories that add up with suitably low error to give you a globally equilibrated model.
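As a toy illustration of that strategy (Python/NumPy; purely illustrative, not a real FEA code): 1-D steady-state heat conduction solved by iterating a purely local rule, with the global context reduced to fixed boundary values the model simply assumes have been equilibrated away.

```python
import numpy as np

# Toy illustration of the modelling strategy described above (not a
# real FEA code): 1-D steady-state heat conduction solved by a purely
# local averaging rule, with the global context reduced to fixed
# boundary values -- the "equilibrated-away" backdrop.

n = 50
T = np.zeros(n)
T[0], T[-1] = 100.0, 0.0          # fixed global boundary conditions

for _ in range(20000):            # each node only ever sees its neighbours
    T[1:-1] = 0.5 * (T[:-2] + T[2:])

# The converged profile is linear, fixed entirely by the boundaries
print(T[n // 2])                  # roughly 49 for these boundary values
```

Note how the global context appears only as two frozen numbers; nothing in the local rule allows the interior to act back on its boundaries, which is exactly the choice being described.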

Does this then say that global downward acting constraints don't exist? Or that reductionist modelling finds ever more clever ways around them?

Now this thread was about the neurology of freewill. (Not modelling neurons with FEA).

The kinds of systems FEA is suited to modelling are things like fluid dynamics. This is the non-living world, where global constraints are holonomic. We are safe to presume the constraints or boundary conditions are at equilibrium and unchanging. Locally the aircraft wing may be subject to some complexity due to emergent turbulent features. But generally temperature, pressure, material strengths and viscosity are a stable backdrop to the model.

There is not a local~global interaction so that, for example, the flex of the wing causes a tropical storm that sends a bolt of lightning that changes the material strength of the wing, or even just causes a dramatic pressure drop in the vicinity of the wing. No, the FEA approach rules out interactions across scale by choice.

But for living systems, we are now talking about systems that have non-holonomic constraints. They do have the informational machinery (such as genes, words, membranes, action potentials, etc) to control their own boundary conditions or downwards acting constraints.

So to model living systems, we have to model that ability to change the global constraints - for meaningful reasons. Which is why I keep challenging you to reply to the literature on top-down selective attention and its power to reshape local neural receptive fields.

You would rather keep the discussion focused on the most reductionist models of single neurons that you can find. And yes, you can take what a receptor pore does and model it as an isolated mechanical device sitting in a stable equilibrium world utterly unlike the real world of a receptor pore. It will tell you something about the local degrees of freedom that the device might have. But it cannot then tell you anything about the kinds of global constraints that act on those degrees of freedom. You literally cannot in principle see them.

Now you can do a Blue Brain exercise and throw a lot of devices together and simulate - see what kind of global organisation arises to constrain a network of artificial neurons. If you have built your simulation with local components that can change their behaviour (as is familiar with neural nets with nodes that can adapt their local weights), then you can start to get a realistic development of local~global interactions.
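A minimal sketch of that kind of adaptive-node simulation (Python/NumPy; Oja's rule here just stands in for "nodes that can adapt their local weights" - it is not a model of real neurons): a single node applies a purely local update, yet its weights end up constrained by a global statistical regularity of the input stream.

```python
import numpy as np

# Minimal sketch (illustrative only): a node applying a purely local
# learning rule whose weights nonetheless come to reflect a global
# regularity of the inputs. Oja's rule converges toward the principal
# component of the input statistics.

rng = np.random.default_rng(0)
w = rng.normal(size=2)
w /= np.linalg.norm(w)

# Inputs with a built-in global correlation between the two channels
cov = np.array([[1.0, 0.9], [0.9, 1.0]])
for _ in range(5000):
    x = rng.multivariate_normal([0.0, 0.0], cov)
    y = w @ x                       # local node output
    w += 0.01 * y * (x - y * w)     # Oja's local learning rule

print(w)  # approximately aligned with the correlated [1, 1] direction
```

The node never "sees" the covariance matrix; only the local update is specified, and the global structure of the input distribution is what ends up shaping the weights.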

But a simulation is NOT a model. The results you are celebrating are the observable output, not an axiomatic input. You are demonstrating an effect, not a cause.

A proper model in this context would be one where you have a handle on both the bottom-up and top-down sources of causality and so can compute the outcomes directly - predict the observable state rather than merely discover it post-hoc.

So this is why systems modelling is different from reductionist modelling. Reductionism wants to deal only in local causation (and so finds ways to equilibrate away any global effects to make them a "void" - an unchanging backdrop). Systems modelling recognises that global constraints can be an active part of the mix and so seeks to include them in the model.

This is of course very difficult to do as yet. In fact it could be another 20 to 30 years before we have the real breakthroughs in this area. Everyone thought fractals, chaos theory and non-linear dynamics were some kind of mathematical modelling revolution. But that was just a first ripple of the change that could come.
 
  • #147
apeiron said:
I would still argue not, as many worlds is exactly the sum of its parts. Every locally forking history accumulates without any constraint. Holism would require that the local freedom to branch be restricted, so that the system only manifested some paths and not all of them.
It appears we have a different idea of the meaning of "holism" as it pertains to quantum mechanics, so this might be interesting to delve into. To me, the quintessential example of holism is the violation of the Bell inequality. This requires correlations of a specific type, i.e., not the type of correlations we have classically-- instead, it requires a concept of a joint wave function, in effect. The simplest example there is a Bell state, like |A>|a> + |B>|b>, where capital letters are for one part of the system and small for the other, and A and B are two different outcomes. Classically, there's just three things there, A-like aspects, B-like aspects, and the probabilistic combination thereof, but the Bell state allows algebraic correlations that are also part of the state, so the state has four elements-- there's an extra element which is the algebraic consequences of the combination (the phase coherences), and that's what unifies the A-like and B-like aspects into a single whole, even when A and B are incompatible outcomes.
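For concreteness, here is a minimal numerical check of that violation (Python/NumPy; the state and measurement angles are just the standard CHSH choices, not anything specific to this discussion):

```python
import numpy as np

# Numerical check of the Bell-inequality violation discussed above.
# Standard CHSH setup: |Phi+> = (|00> + |11>)/sqrt(2), with spin
# measurements taken in the x-z plane.

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def spin(theta):
    """Spin observable at angle theta in the x-z plane."""
    return np.cos(theta) * SZ + np.sin(theta) * SX

def E(state, a, b):
    """Correlation <state| spin(a) (x) spin(b) |state>."""
    return np.real(state.conj() @ np.kron(spin(a), spin(b)) @ state)

phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

a1, a2 = 0.0, np.pi / 2           # Alice's two settings
b1, b2 = np.pi / 4, -np.pi / 4    # Bob's two settings

S = (E(phi_plus, a1, b1) + E(phi_plus, a1, b2)
     + E(phi_plus, a2, b1) - E(phi_plus, a2, b2))
print(S)  # 2*sqrt(2) ~ 2.828, above the classical bound of 2
```

No classical assignment of pre-existing values to the four measurement settings can push S above 2; the extra algebraic element of the joint state (the phase coherence) is what buys the excess.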

Seeing a macro reality as if it were akin to a Bell state is what many-worlds does-- Copenhagen treats the macro reality like the classical state, where the probabilistic element reflects our lack of information rather than something that is really true about the reality. In many worlds, the algebraic (holistic) combination is a legitimate aspect of the reality (it's one that we the observer are never privy to, because the phase coherences are of such a complex and intractable nature that we never see their consequences in any experimental outcome-- our degrees of freedom are limited to an "island" of coherent phases, the boundaries of which are unapproachable by the way we interact with our reality). In short, in Copenhagen, the observer effects are what determine reality, and in many-worlds, the observer effects are a prison that prevents us from seeing the reality. That's why I see the difference as which one subordinates to the other: the physics or the physicist. And the meaning of "holism" is also dependent on this choice-- if we subordinate the physicist to the physics, then what is "whole" is the mathematical concept, the state vector with all its invisible phase correlations that we cannot be affected by because they don't fit into our coherent subset. If we subordinate the physics to the physicist, then what is "whole" is the consistent history that physicist will use to describe their reality, and the "many worlds" seem fragmented.
By contrast in QM views of causality, a Feynman sum over histories approach to collapse is holistic as all paths "happen" but then there is a global constraint to some single self-consistent event.
Yes, you take the Bohr view that the observer effects are global constraints that reality must satisfy-- many worlds drops that requirement. I prefer the pragmatism of the Bohr perspective, but I think that when we take the pragmatic approach, we don't really have a holistic perspective left, because we cannot say that only a single "state of reality" is consistent with the global constraints-- there may be many states that are equally consistent, at the level of precision by which those constraints can be defined.
 
  • #148
apeiron said:
Causality is of course fundamental, and freewill is as near epiphenomenal as you can get :smile:. Causality would be our general or universal model of why anything happens (why even existence happens), while freewill is just some vanishingly rare feature of a complex system, relatively impotent on the cosmic scale.


Your certainty isn't warranted.

It doesn't really matter how convincingly the 4 fields are able to mimic the existence of solid objects with causal relations between them; they are no such thing. And this is a truly tremendous point for philosophy, as by far the most consistent model of reality we have today (it's also the only consistent one) is that of fields, with the field intensity representing the probability that some classical-looking (causality-preserving) event will take place somewhere. If you think you understand reality via your models, you don't. If you hope to understand the future TOE (the master equation), you won't. Causality is just another part of the human baggage and will play a secondary role (or no apparent role) in a complex, self-consistent mathematical scheme (TOE).

Another point would be that if causality were so important and fundamental, we'd have figured out by now why the Schroedinger equation works as it does, and probabilities would give way to certainty.

The universe isn't classical (this is certain); it's quantum, and it looks classical under specific circumstances, but in others the classical worldview is totally inconsistent and can't explain a whole myriad of phenomena that the quantum worldview can. In the other thread in the quantum forum someone is asking how two solid bodies can actually 'touch'. Go explain that in classical terms, when the whole mindset that prompted the question is completely false.
 
  • #149
Maui said:
Your certainty isn't warranted.

But what I actually said was that even causality is something we "just model". Which is why both the reductionist model, and the systems model, could be "right" - each effective for their purposes, or within their domains.
 
  • #150
Ferris_bg said:
So, in the materialistic view you can either have mental causation identical with the physical causation, or you can embrace epiphenomenalism and qualia. Either way, free will is impossible. If you want to find free will, you must seek it outside the materialistic domain.
Hello Ferris_bg, and welcome to the dialog. Personally I have no difficulty rejecting physicalism; it strikes me as weak logic. It seems to basically follow the path "because physical models have given us excellent predictive power, we will embrace the idea that the universe is physical." That is far from a syllogism! Note also that when I say "physical" or "material", what I really mean (and what I would claim others really mean) is "physical models" or "material models", for the very use of the term invokes models. If one is not talking about models, then the terms "physical", "material", and "reality" have no distinctions, so add no content. Whatever is, is, and the labels we hang on it are of no consequence unless those labels characterize our models of it.
 
