Neural correlates of free will

In summary, Benjamin Libet's work suggests that our decisions to act occur before our conscious awareness of them. The problem this poses for the idea of free will is that it seems to set up an either/or battle between determinism and free will. Some might respond by granting that the neurological correlates of free will are deterministic (if one wishes to adopt a kind of dualistic picture in which all that is physical is deterministic and free will is housed in some extra-physical seat of conscious choice). Others might look critically at the very assumption that physically identifiable processes are deterministic in some "absolutely true" way, such that they could preclude a concept of free will.
  • #106
Maui said:
The fundamental 'stuff' that everything is supposed to emerge from is still missing. For more than 100 years scientists have been failing to identify anything that resembles fundamental building blocks from which matter, time and space emerge (I reject strings and loops as idle speculation at this time, and wavefunctions and Hilbert spaces as too ill-defined and ambiguous mathematical tricks).

And I agree. I said that even local stuff (substance, matter, atoms) would be emergent. That is why a logic of vagueness is required here.

The belief in elemental building blocks is precisely what I have been arguing against.
 
  • #107
A useful book on the complexity theory view of free will is Nancey Murphy and Warren Brown's Did My Neurons Make Me Do It?

Murphy gives a summary of some of her arguments here...
http://www.metanexus.net/magazine/tabid/68/id/10865/Default.aspx

The topic of downward causation (and its opposite, causal reductionism) is an interesting one in its own right. But it would also be an interesting topic from the point of view of the sociology of knowledge. What I mean by this is, first, there are many ardent reductionists among philosophers and scientists, and I would state their position not in terms of “I have good grounds for this thesis,” but rather: “I can’t imagine how reductionism can fail to be true.” On the other hand, one can do a literature search in psychology and cognitive neuroscience and find hundreds of references to downward causation. Presumably these scientists would not use the term if they thought there was anything controversial about it.
 
  • #108
apeiron said:
Yes, I am sure there is no way to change your mind here.
Thanks apeiron. That's probably the one thing we'll always agree on! lol
But anyway, boundary conditions would be another name for global constraints of course.

Immediately, when challenged, you think about the way those boundary conditions can be changed without creating a change. Which defeats the whole purpose. The person making the change is not factored into your model as a boundary condition. And you started with a system already at equilibrium with its boundary conditions and found a way to move them so as not to change anything. (Well, expand the boundary too fast and it would cool and the cells would fall apart - but your imagination has already found a way not to have that happen because I am sure your experimenter has skillful control and does the job so smoothly that the cells never get destabilised).

So FEA as a perspective may see no global constraints. Which is no problem for certain classes of modelling, but a big problem if you are using it as the worldview that motivates your philosophical arguments here.

And as I said, a big problem even if you just want to model complex systems such as life and mind.

As asked, I provided examples of how top-down constraints such as selective attention have been shown to alter local neural receptive fields and other aspects of their behaviour. You have yet to explain how this fits with your FEA perspective where this kind of hierarchical causality does not appear to exist.
I don't see anything in those papers that would seriously suggest there is something like "top down constraints" that influence local causation. If conservation principles (conservation of mass, energy, momentum) are valid at every level, there is no room for downward causation, top-down constraints or any other uber, super premium level forces influencing local causation. It's all just weak emergence, and that's all we're entitled to (per Bedau).
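To spell out the kind of principle I have in mind (this is just the standard textbook form of a local conservation law, nothing specific to those papers):

$$\frac{\partial \rho}{\partial t} + \nabla \cdot \mathbf{j} = 0$$

Any change in the conserved density ρ inside a small region is fully accounted for by the flux j through that region's boundary, which is why I say there's no bookkeeping room left over for an extra top-down contribution.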
 
  • #109
Hi Pythagorean,
Pythagorean said:
Yes, I've seen the definitions, but my point was, I guess, that I stand alongside the people that think it's "magic". It seems rather mystical to me, which means either I don't understand it or it's bs. I chose to say I didn't understand it; I didn't mean that I didn't know the definition.
What is "it" you're referring to?

I can definitely accept that there's global behavior that doesn't occur at smaller scales (a water molecule does not manifest a wave).
I accept there's "global behavior" as well, just as Bénard cells for example exhibit a higher level behavior that "doesn't occur at smaller scales". But that doesn't mean there's a global orchestra conductor or any kind of functionally relevant top down constraints that alter what physically occurs at a lower level. Weakly emergent systems (as defined by Bedau for example) such as the Game of Life exhibit similar "global" behaviors. However, there's nothing but local, causal interactions that create those global behaviors. That's the philosophy behind FEA* and it's the philosophy behind the compartment models used in neuroscience today. Programs like NEURON and Genesis and the Blue Brain project demonstrate exactly the type of behavior we would expect given nothing but the weakly emergent rules set up by compartment models. There's no need for additional, higher level physical laws that somehow crowd out or take over the lower level laws. In fact, there are no such laws. When we talk about levels in nature, we're not talking about higher level laws; we're merely talking about the weakly emergent regularities that emerge from those lower level physical laws.
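To make the point concrete, here's a minimal sketch of the Game of Life's update rule (illustrative code of my own, not taken from NEURON, Genesis or anything else mentioned here): each cell consults only its eight immediate neighbors, yet "global" travelling structures such as gliders emerge from the aggregate of those purely local interactions.

```python
# Minimal Game of Life sketch: the update rule is purely local
# (each cell looks only at its eight neighbours), yet global patterns
# such as gliders emerge from the aggregate of these local interactions.
import numpy as np

def step(grid):
    """Advance the board one generation using only local neighbour counts."""
    # Count live neighbours by summing the eight shifted copies of the grid.
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Conway's rules: a live cell survives with 2 or 3 neighbours,
    # a dead cell becomes live with exactly 3 neighbours.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

# A glider: a "global" travelling structure produced by local rules alone.
grid = np.zeros((20, 20), dtype=int)
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[r, c] = 1

for generation in range(8):
    grid = step(grid)

print("Live cells after 8 generations:", int(grid.sum()))
```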

Bedau's paper regarding weak emergence can be found on the web here:
http://www.google.com/search?hl=en&source=hp&q=bedau+weak+emergence&aq=f&aqi=&aql=&oq=&safe=active

I think you owe it to yourself to look at how neuroscience is treating the interactions of neurons at the level where computational modeling meets physical testing, both in dissociated neurons and in vivo. There's an interesting talk by Henry Markram on TED describing in broad terms how they're doing this on the Blue Brain project. He doesn't come out and explicitly state how they use the compartment method, but that's what's being done. I have another link that discusses how the Blue Brain project is using this method in my previous post (the one with the picture).
http://www.ted.com/talks/henry_markram_supercomputing_the_brain_s_secrets.html

*Note: FEA also models dissipative and nonlinear systems.
 
  • #110
Q_Goest said:
I accept there's "global behavior" as well, just as Bénard cells for example exhibit a higher level behavior that "doesn't occur at smaller scales". But that doesn't mean there's a global orchestra conductor or any kind of functionally relevant top down constraints that alter what physically occurs at a lower level. Weakly emergent systems (as defined by Bedau for example) such as the Game of Life exhibit similar "global" behaviors. However, there's nothing but local, causal interactions that create those global behaviors.



Local, causal interactions between what? To be certain that your worldview holds, you have to get to the bottom of it. Let's see what the biggest names in physics have come up with so far:

1. Non-local relativistic and deterministic wave structures (though 'relativistic' undermines the whole idea of cause and effect)
2. Abstract fields
3. Your own fantasy
4. Our collective fantasy
5. Strings
6. Loops
7. Add anything you like
 
  • #111
Maui said:
Local, causal interactions between what? To be certain that your worldview holds, you have to get to the bottom of it. Let's see what the biggest names in physics have come up with so far:

1. Non-local relativistic and deterministic wave structures (though 'relativistic' undermines the whole idea of cause and effect)
2. Abstract fields
3. Your own fantasy
4. Our collective fantasy
5. Strings
6. Loops
7. Add anything you like
Let's not get silly. Local causal interactions are well understood and modeled mathematically by engineers and scientists. Examples include the Navier-Stokes equations, Hooke's law, and so on. I really don't understand why that's a problem.
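Just to be concrete about what I mean by "well understood" (these are the standard textbook forms, nothing exotic): Hooke's law ties stress to strain at a point, and the Navier-Stokes momentum equation ties a fluid parcel's acceleration to the pressure gradient, viscous stresses and body forces acting on it right there:

$$\sigma = E\,\varepsilon, \qquad \rho\left(\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u}\right) = -\nabla p + \mu\,\nabla^{2}\mathbf{u} + \mathbf{f}$$

Both are local relations: everything on the right-hand side is evaluated at the same point as the quantity being driven.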
 
  • #112
I think Maui's point is that even local causal interactions do not form a fully consistent ontology of the situation. So he/she might be saying that not only is there the issue that local causal interactions might be incomplete in regard to additional top-down mechanisms of the type apeiron is raising, it's even worse-- they are internally inconsistent in that they require we adopt an incomplete ontology even as far as local causal interactions go. This relates to the difference between computed trajectories and "true" trajectories, which raises the difference in the ontology of practical calculations versus what reality itself, even if we imagine it actually is deterministic, is doing. This seems a small issue, because the computed trajectories should mimic the "correct" ones in a statistical way, but there's always the chance that something underlying is going on that doesn't show up until you look for the right kinds of correlations.
 
  • #113
Ken G said:
I think Maui's point is that even local causal interactions do not form a fully consistent ontology of the situation. So he/she might be saying that not only is there the issue that local causal interactions might be incomplete in regard to additional top-down mechanisms of the type apeiron is raising, it's even worse-- they are internally inconsistent in that they require we adopt an incomplete ontology even as far as local causal interactions go. This relates to the difference between computed trajectories and "true" trajectories, which raises the difference in the ontology of practical calculations versus what reality itself, even if we imagine it actually is deterministic, is doing. This seems a small issue, because the computed trajectories should mimic the "correct" ones in a statistical way, but there's always the chance that something underlying is going on that doesn't show up until you look for the right kinds of correlations.
The only problem I see with this is that there is nothing 'functionally meaningful' that is gained by reducing classical scale phenomena (such as Bénard cells or the interactions between neurons) to quantum scale interactions. By functionally meaningful, I mean that we choose to use classical descriptions because they account for the statistical aggregate of all the particles (molecules) and there is nothing else in the configuration of those particles that can influence the system in a way that matters. No one is arguing that the classical description of the world is an exact one. The point is only that reducing a system of Bénard cells or brain cells to the level of particle interactions doesn't gain us anything when we talk about the overall phenomena being studied.

In fact, the "dynamic systems" approach would agree with this. That approach holds that there are classical scale interactions and "global constraints" that don't need to be reduced to the quantum scale. The emergent structures emerge BECAUSE of the classical scale interactions (ie: nonlinear ones, etc...). Not that I agree with that approach.
 
  • #114
Q_Goest said:
The only problem I see with this is that there is nothing 'functionally meaningful' that is gained by reducing classical scale phenomena (such as Bénard cells or the interactions between neurons) to quantum scale interactions. By functionally meaningful, I mean that we choose to use classical descriptions because they account for the statistical aggregate of all the particles (molecules) and there is nothing else in the configuration of those particles that can influence the system in a way that matters. No one is arguing that the classical description of the world is an exact one. The point is only that reducing a system of Bénard cells or brain cells to the level of particle interactions doesn't gain us anything when we talk about the overall phenomena being studied.

In fact, the "dynamic systems" approach would agree with this. That approach holds that there are classical scale interactions and "global constraints" that don't need to be reduced to the quantum scale. The emergent structures emerge BECAUSE of the classical scale interactions (ie: nonlinear ones, etc...). Not that I agree with that approach.



If one is certain that causality will hold after the foundational issues are solved, then one hasn't poked deep enough.

There are obvious problems with causality related to the implications of relativity as well - causality doesn't exist except as a description of a series of seemingly causally related events. No need to stick your head in the sand - if our knowledge of the world is in trouble, there is nothing to lose by figuring it out, as we are all here for the truth, whatever that may be. For all I have been exposed to, and from the opinions I've seen on the subject from researchers working on the foundations of physics, causality is not fundamental and all of our knowledge of the world is incomplete, if not drastically false.
 
  • #115
Q_Goest said:
Let's not get silly. Local causal interactions are well understood and modeled mathematically by engineers and scientists. Examples include the Navier-Stokes equations, Hooke's law, and so on. I really don't understand why that's a problem.



The problem is (and it's very deep) what those local causal interactions are between. The ontology of Newton is wrong and doesn't work. What does work does not always favor causality, and when it does, it involves more magic than weak and strong emergence...

Your model, whatever that may be, is just a model. It's neither reality, nor HOW reality is. It's awfully easy for anyone to shoot it down, as reality is much weirder than human imagination would accommodate. In philosophy, we are seeking (at least striving towards) truths and complete ontologies, and naive models are always the easiest to demolish. As one of the great thinkers once said - if your theory is not crazy enough, there is no hope for it.
 
  • #116
Maui said:
The problem is (and it's very deep) what those local causal interactions are between. The ontology of Newton is wrong and doesn't work. What does work does not always favor causality, and when it does, it involves more magic than weak and strong emergence...

Your model, whatever that may be, is just a model. It's neither reality, nor HOW reality is. It's awfully easy for anyone to shoot it down, as reality is much weirder than human imagination would accommodate. In philosophy, we are seeking (at least striving towards) truths and complete ontologies, and naive models are always the easiest to demolish. As one of the great thinkers once said - if your theory is not crazy enough, there is no hope for it.
Sorry, I really don't see anything deep here. Let's just say we disagree and leave it at that. :(
 
  • #117
Q_Goest said:
Sorry, I really don't see anything deep here. Let's just say we disagree and leave it at that. :(


Just one example and I am leaving - the satellites that keep the GPS system working measure time differently, because time runs differently for observers in different reference frames. This is a fact backed up by hardcore science and thousands of experiments. The implication of time 'flowing' differently is that your NOW has already happened (passed) in another frame of reference (e.g. that of the GPS clocks). It renders causality merely apparent (for some reason things seem, just seem, to have causes in the world of relativity). The other implication concerns free will and free choice: it must also be just apparent. Add fields (the most consistent contemporary model we've built so far) and not just causality but everything observable is just excitations of a field (for some reason the excitations of the fields tend to conspire towards a seeming classical causality).
 
  • #118
Q_Goest said:
By functionally meaningful, I mean that we choose to use classical descriptions because they account for the statistical aggregate of all the particles (molecules) and there is nothing else in the configuration of those particles that can influence the system in a way that matters.
But this is just the issue-- how do we know a priori what is an "influence that matters"? You start by choosing what will matter to you, and this will then motivate the models you create, and when you will declare success. But when trying to model something like free will, when will you claim success? It seems very possible that when the only influences that matter are the ones that achieve gross bulk statistical behaviors, expressed within preconditioned degrees of freedom (to borrow from apeiron's language) and boundary conditions, you will not learn what kinds of special correlations might lead to quantitatively different outcomes, and will not know if you have succeeded because you might not even be trying to model the right things to get free will. I'm not saying you know you'll fail, I'm saying you can't know you'll succeed, and there might be reason, for some at least, to suspect you will fail.
In fact, the "dynamic systems" approach would agree with this. That approach holds that there are classical scale interactions and "global constraints" that don't need to be reduced to the quantum scale. The emergent structures emerge BECAUSE of the classical scale interactions (ie: nonlinear ones, etc...). Not that I agree with that approach.
For a second I had to check this wasn't coming from apeiron! I'm not saying we need to connect to the quantum scale, I view the quantum scale as simply an example of the kinds of unexpected correlations that emerge only when you know what to look for. For example, in a quantum erasure experiment, there is no hint in the raw data that any correlations exist there; they are embedded in the entanglements in ways that require clever manipulation to extract. If someone happened to do a similar experiment prior to the days of quantum mechanics, they would have no idea whatsoever that they were missing anything using a classical mixed-state analysis-- and they might be tempted to use language about their assessment of their success that is similar to yours in the context of nonlinearly coupled systems.
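To sketch what I mean in textbook notation (a schematic two-state version, not any particular experimental setup): take an entangled signal-idler pair,

$$|\psi\rangle = \tfrac{1}{\sqrt{2}}\bigl(|0\rangle_s|0\rangle_i + |1\rangle_s|1\rangle_i\bigr), \qquad \rho_s = \operatorname{Tr}_i\,|\psi\rangle\langle\psi| = \tfrac{1}{2}\bigl(|0\rangle\langle 0| + |1\rangle\langle 1|\bigr).$$

Looked at on its own, the signal is an incoherent mixture and shows nothing interesting. But condition on an idler measurement in the basis |±⟩ = (|0⟩ ± |1⟩)/√2 and the signal is left in the coherent superpositions (|0⟩_s ± |1⟩_s)/√2, whose complementary interference patterns appear only once you sort the data by the idler outcome. That is exactly the kind of correlation a classical mixed-state analysis would never reveal.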
 
  • #119
Q_Goest said:
I don't see anything in those papers that would seriously suggest there is something like "top down constraints" that influence local causation.

Yes, but you never actually made the argument against what has been said, only stated that you "don't see it".

You asked for the evidence with regards to the brain and I supplied references about top-down constraints shaping neural receptive fields. That evidence still stands because you have made no arguments against it.

Perhaps you don't realize how critical this is. The supervenience view of emergence says that given the same fixed set of micro-causes, we must always logically expect the same macro-outcome. So even if there appear to be top-down effects, these are merely supervenient on the micro-causes.

But if the micro-causes can be shown to be not fixed, but instead shaped actively by downwards causation, then supervenience fails. Instead the systems view rules.

So when presented with evidence of global constraints shaping the micro-causes - attention and expectation changing local neural firing characteristics - you have to explain why this is not what it appears to the neuroscientists who have done the studies.

Saying you don't see it is neither here nor there.
 
  • #120
There are two themes I see emerging in apeiron's comments that I find intriguing. I don't have the expertise to recognize how critical they are for free will, but neither can I reject the possibility that they may be critical. Those themes have the flavor of a kind of "balancing act" or "tennis match", choose your metaphor, between different types of phenomena, which achieve greater power and richness by virtue of the interplay than they would have on their own.

One of those dichotomies of phenomena concerns microcauses vs. system-wide self-organization of the constraints/boundary conditions/degrees of freedom that affect the microcauses. This seems to allow a type of information exchange between the two aspects of the combined behavior, perhaps akin to how a brain interacts with its sources of perception. A brain with no perception is not a brain because it is not functioning like a brain, and a perception with no brain is not a perception because it is not being perceived.

The other dichotomy concerns deterministic vs. random behavior, or predictable and unpredictable if you prefer. I tended to see those as separate issues from free will, because they are concepts that relate to different kinds of questions, but apeiron's comments have suggested value in looking at the "razor's edge" between them, where "too much determinism" is the difference between a machine and something mentally alive, and "too much randomness" is the difference between the weather and something that can think. Perhaps an important element in the tennis match of the first dichotomy is maintaining the necessary balance in the second. That's at least an interesting insight, I think, even if Q_Goest can point to a wide range of current successes in the area of microcausation. I think to some extent, you get your returns in the places where you make your investments.
 
  • #121
Ken G said:
I'm not saying we need to connect to the quantum scale, I view the quantum scale as simply an example of the kinds of unexpected correlations that emerge only when you know what to look for. For example, in a quantum erasure experiment...

The lessons of quantum mechanics are completely relevant to my POV. What better example of downward causation is there than the idea that "observers" are required to develop a classical crispness in reality?

A quantum event, as in the eraser experiments, is shaped by the global constraints of the path set up by an experimenter. Unless you go to extreme ontologies like hidden variables or many worlds, you have to say there is nothing concretely "there" at the microscale apart from a quantum potential that then gets shaped to have an identity by the particular structure of the classical world.

The classical world acts downwards to decohere the unformed potential of the quantum realm. There is nothing fixed at the QM scale, until it has been fixed.

And also consistent with my systems view, the classical world can only constrain the quantum potential, not determine it. Downward causation can restrict the degrees of freedom, but not remove them all. So there is still that essential "randomness" or indeterminacy about what actually does happen (even if statistically, that indeterminacy has hard bounds).

Q Goest mentions that this is the case...and then hurriedly says he doesn't want to go there. Indeed, it would be fatal to his insistence on causal supervenience. (Unless he resorts to hidden variables, many worlds, the usual attempts to avoid the "weirdness" of downward acting constraints by leaping to far weirder views of nature that appear to preserve the mechanical principle that all that exists is bottom-up atomistic construction).
 
  • #122
apeiron said:
The classical world acts downwards to decohere the unformed potential of the quantum realm. There is nothing fixed at the QM scale, until it has been fixed.
Yes, I agree quantum mechanics forms an excellent example of this kind of surprising phenomenon, even if brains are not manifestly quantum mechanical. In quantum mechanics, everything happens that can happen, and it is only sorting of possibilities by the outer-scale environment that decides what actually does happen and brings the phenomena into the classical realm of black-and-white thinking. There remains no self-consistent ontology (without paying a radical price of subordinating the observed phenomena to how we conceptualize it) for describing how the microcausations that appear in the Schroedinger equation can create a single emergent classical reality. Decoherence is how we treat this event, but it involves deciding what we will care about, not an internally self-consistent treatment. I think that is also a relevant analogy to what FEA does-- first decides what we will care about, and then sees what harvest we reap from those choices.
Q Goest mentions that this is the case...and then hurriedly says he doesn't want to go there. Indeed, it would be fatal to his insistence on causal supervenience.
I think the main source of disagreement is a choice of stance-- Q_Goest takes the position that he would prefer not to leave a certain comfort zone, with demonstrable benefits, until he is absolutely certain it is required to do so, and what's more, he will tend to rig the meaning of "success" to increase its likelihood. We are taking the tack that it is better to jump into the unknown and murkier waters, and hope to discover something fundamentally new, than to avoid it simply because it is so murky.
(Unless he resorts to hidden variables, many worlds, the usual attempts to avoid the "weirdness" of downward acting constraints by leaping to far weirder views of nature that appear to preserve the mechanical principle that all that exists is bottom-up atomistic construction).
Interestingly, even many-worlds is not really an atomistic construction, it is about as holistic as they come. It begins and ends with accepting that the concept of a "state vector for the universe" is a meaningful and coherent construct, because if you start with that, and trust the Schroedinger equation, then you will always have it (and its attendant many conscious worlds emerging within), and if you do not start with it, then you will never have it appear. You get out exactly what you put in, and no experiment tells you if putting it in has done anything useful for you, so it's purely a desire to apply top-down imagery that motivates many worlds in the first place. Elsewhere, I've made the point that if you think about it, pure states don't propagate top-down, they propagate bottom-up: you get a pure state when you break a piece off from a larger system and force it to satisfy certain constraints, and there's really no other way that we ever encounter pure states in quantum mechanical analysis.
 
  • #123
Q_Goest said:
The only problem I see with this is that there is nothing 'functionally meaningful' that is gained by reducing classical scale phenomena (such as Bénard cells or the interactions between neurons) to quantum scale interactions.

But again, that is merely an epistemological point. We all agree (I think) that when reductionism works in the pragmatic sense of useful theories, then it works.

But what is under debate is whether the atomistic approach exemplified by FEA works when we consider complex systems such as brains that "have freewill, consciousness and the appearance at least of downward causation". Or indeed if we step back to take a systems view of physical reality itself (one that includes the quantum scale for example).

So arguing that a modelling strategy works in some cases does not prove that it must work in all cases.

Unless you can make an ontological level case that the map is the terrain and reality really is just the sum of its parts. And here the observational evidence weighs heavily against you - such as the view from neuroscience and QM.
 
  • #124
Ken G said:
Interestingly, even many-worlds is not really an atomistic construction, it is about as holistic as they come. It begins and ends with accepting that the concept of a "state vector for the universe" is a meaningful and coherent construct, because if you start with that, and trust the Schroedinger equation, then you will always have it (and its attendant many conscious worlds emerging within), and if you do not start with it, then you will never have it appear. You get out exactly what you put in, and no experiment tells you if putting it in has done anything useful for you, so it's purely a desire to apply top-down imagery that motivates many worlds in the first place. Elsewhere, I've made the point that if you think about it, pure states don't propagate top-down, they propagate bottom-up: you get a pure state when you break a piece off from a larger system and force it to satisfy certain constraints, and there's really no other way that we ever encounter pure states in quantum mechanical analysis.

I see what you mean, but I think the critical question is when a developing QM potential encounters the global constraints that collapse it into a classical state.

Many worlds avoids the issue of this encounter because all destinies happen. So instead of a single global history, you just spawn endless micro-branches of history. Constraints break reality at a local level, atomising its history, rather than constraint being at the global level where it is more than the sum of its parts and organising a single self-consistent history for reality.

Decoherence of course does not collapse the wave function but instead disperses it to a degree that it seems to have vanished as an issue - a Hopf flow model. But a systems view could fix this aspect of decoherence, I believe, by treating reality as a dissipative structure. The classical realm would have the downward causal power to actually collapse (that is, constrain) any spreading QM degrees of freedom.

Pure states would propagate from the bottom up on a probabilistic basis because constraint is only constraint (the top-down causality is not a simple deterministic causality). All that constraint means is that the boundary conditions (such as the experimenter's set-up) exist in concrete fashion. But then the QM potential still has its remaining "internal" degrees of freedom which the constraints can never see.

Take spin for example. If I can constrain a point to a locale, I still have left open its potential to be spinning. Constraining the translational symmetries does not remove the degrees of freedom represented by the rotational symmetries.

So classical reality exists by constraining the degrees of freedom represented by the notion of a pure, or unconstrained, quantum realm. And instead of the "observer" being a mysterious device outside the system as it is in the reductionist view, the observer is clearly identified with the global constraints, the information that does the downward causation.

And this systems view also leaves room for the remaining quantum degrees of freedom that are the "weird" bit. When almost everything is being tidily regulated from the top-down, it can seem weird that the micro-scale is not actually under complete control but is still a little random.
 
  • #125
Maui said:
Just one example and I am leaving - the satellites that keep the GPS system working measure time differently, because time runs differently for observers in different reference frames. This is a fact backed up by hardcore science and thousands of experiments. The implication of time 'flowing' differently is that your NOW has already happened (passed) in another frame of reference (e.g. that of the GPS clocks). It renders causality merely apparent (for some reason things seem, just seem, to have causes in the world of relativity). The other implication concerns free will and free choice: it must also be just apparent. Add fields (the most consistent contemporary model we've built so far) and not just causality but everything observable is just excitations of a field (for some reason the excitations of the fields tend to conspire towards a seeming classical causality).

Ummmm... it seems nobody else is going to break this to you, so I will... what the hell are you talking about? I understand your example, which is not one I'd use, nor does it have ANYTHING to do with causality, and free will in this context.
 
  • #126
Q_Goest said:
Hi Pythagorean,

What is "it" you're referring to?

Downward causation (in the strong sense). Isn't that what we were talking about?

I think you owe it to yourself to look at how neuroscience is treating the interactions of neurons at the level where computational modeling meets physical testing, both in dissociated neurons and in vivo.

Considering that's what both my courses and research consist of, I don't know what you think I'm missing. As we all point out though, the model is not the reality. The models are (successfully) predicting deterministic behavior of neural networks, but they don't pretend to be the full story. It's a story about the electrical signal carried by the neurons. We don't consider, for example, the effects of calcium-triggered quantal release in the synapse via SNARE and SNAP proteins, or the way in which the postsynaptic terminal influences the transcription factors of gene networks (neuroscience epigenetics). Many models don't even consider the complicated geometry and electrotonic properties of the dendritic processes. Many models don't consider volume transmission or global field effects. But they can, and especially if those behaviors are your interest.

There's still a lot we don't even know about neurotransmitter interactions in the synapse, and we're just beginning to understand the role of gap junctions in global regulation.

[...] there's nothing but local, causal interactions that create those global behaviors. That's the philosophy behind FEA* and it's the philosophy behind the compartment models used in neuroscience today.

I doubt that's any scientific field's prevailing philosophy; please provide clear evidence of this. I think this is your interpretation, particularly given that you use "nothing but" as a qualifier. The prevailing philosophy is empiricism. To say there's nothing but local, causal interactions that create those global behaviors is a claim I've never seen. Whether it's true or not isn't important though. The interesting part of these complex systems is how the global behavior causes local interactions to occur, regardless of whether the global behavior originated from local interactions or not. We're not really interested in chicken and egg arguments in the lab; that's for philosophy forums.
 
  • #127
apeiron said:
Many worlds avoids the issue of this encounter because all destinies happen. So instead of a single global history, you just spawn endless micro-branches of history. Constraints break reality at a local level, atomising its history, rather than constraint being at the global level where it is more than the sum of its parts and organising a single self-consistent history for reality.
And yet, it is really many-worlds which is holistic, and has a whole that is more than the sum of its parts. Many-worlds subordinates the physicist to the physics, and so invents a gossamer web of invisible coherences that act like glue between the islands of different worlds. The whole pure state is more than the sum of the "worlds", because the worlds lack these connections that the denizens of these worlds can never cross or even perceive. Some would call that holism on steroids, and prefer to subjugate the physics to the physicist. In that case, there is no need to restore anything that has been ruled out by the constraints of the "classical realm" (by which we both mean, observer effects). When physics is seen as the way a brain interacts with its environment, rather than the way an environment gives rise to a brain, there is no need for the concept of a unified state-- a state is merely whatever usefulness and consistency is left when all that is useless or inconsistent with constraints has been thrown out. So many-worlds is a glue, and Copenhagen is a sifter.
And this systems view also leaves room for the remaining quantum degrees of freedom that are the "weird" bit. When almost everything is being tidily regulated from the top-down, it can seem weird that the micro-scale is not actually under complete control but is still a little random.
An emerging idea is that physics, if not consciousness itself, involves that razor's edge between that which seems determined and that which seems random. That tennis match may echo your points about an interplay between bottom-up and top-down interactions.
 
  • #128
nismaratwork said:
Ummmm... it seems nobody else is going to break this to you, so I will... what the hell are you talking about? I understand your example, which is not one I'd use, nor does it have ANYTHING to do with causality, and free will in this context.



You should probably have read my previous posts first instead of yelling. Anyway, my latest post (the one you quoted) was hypothetically taking into account a global-scale view of reality (the so-called "God's eye view") and the role causality plays in view of relativity. Causality in the blockworld is not a fundamental feature of reality; instead, it's just an ordering of events in a causally-looking way when the observer is in a particular frame of reference in this Lorentz-invariant reality. As I stated earlier, this undermines the idea that things and events are really what they are because of causality (though they appear to be what they are because of causality). I stand by my words: causality very likely will not be a fundamental feature of a TOE, but just apparent/emergent (it doesn't really matter if everyone is already aware of this, but there is already a consensus on this among those who work on the foundational issues).

For causality to be fundamental, you'd need the universe of Isaac Newton. That universe is a mirage, however.


If you've been reading Luboš Motl's blog, you've probably seen where the highest bets are being placed (which confirms what apeiron and I said earlier on emergence):

http://motls.blogspot.com/2004/10/emergent-space-and-emergent-time.html
 
  • #129
Personally, I always get a chuckle when I see the term "fundamental" used in physics. What does that even mean? Perhaps when we look at the history of physics, we should start relating to what physics actually is rather than how we might like to imagine it. The natural conclusion is that the word "fundamental" by itself does not have meaning in physics, but "more fundamental" does. Given this, we should not be surprised that causality is not fundamental, but we can perhaps view it as "more fundamental" than a concept like space. The prevailing question of this thread is then, "which is more fundamental, causality or free will?" Or perhaps neither emerges from the other, but both emerge from something else.
 
  • #130
Ken G said:
Personally, I always get a chuckle when I see the term "fundamental" used in physics. What does that even mean? Perhaps when we look at the history of physics, we should start relating to what physics actually is rather than how we might like to imagine it. The natural conclusion is that the word "fundamental" by itself does not have meaning in physics, but "more fundamental" does. Given this, we should not be surprised that causality is not fundamental, but we can perhaps view it as "more fundamental" than a concept like space. The prevailing question of this thread is then, "which is more fundamental, causality or free will?" Or perhaps neither emerges from the other, but both emerge from something else.

Hmmm... The very search for what we like to imagine it to be is what set Einstein on his course to take a "Heuristic" view of light. It also hobbled him later in life...

... what to conclude from that?
 
  • #131
Maui said:
You should probably have read my previous posts first instead of yelling. Anyway, my latest post (the one you quoted) was hypothetically taking into account a global-scale view of reality (the so-called "God's eye view") and the role causality plays in view of relativity. Causality in the blockworld is not a fundamental feature of reality; instead, it's just an ordering of events in a causally-looking way when the observer is in a particular frame of reference in this Lorentz-invariant reality. As I stated earlier, this undermines the idea that things and events are really what they are because of causality (though they appear to be what they are because of causality). I stand by my words: causality very likely will not be a fundamental feature of a TOE, but just apparent/emergent (it doesn't really matter if everyone is already aware of this, but there is already a consensus on this among those who work on the foundational issues).

For causality to be fundamental, you'd need the universe of Isaac Newton. That universe is a mirage, however.

I'm fairly sure that the Bohmians would take exception to your assessment, and many others at that. Moreover, any speculation about a theory that is as elusive as any is a bit absurdist given your lead-in, but... OK. I think you should take these thoughts to QM, and see how they fly there (hint... lead... brick...)


Maui said:
If you've been reading Luboš Motl's blog, you've probably seen where the highest bets are being placed (which confirms what apeiron and I said earlier on emergence):

http://motls.blogspot.com/2004/10/emergent-space-and-emergent-time.html

I agree with apeiron, but in part because he makes his argument using... arguments and references. You're making suppositions and personal speculation, and that's nothing to work with, even if I agree with your conclusions. Again, perhaps that is what Q_Goest found wanting in your post?
 
  • #132
nismaratwork said:
Hmmm... The very search for what we like to imagine it to be is what set Einstein on his course to take a "Heuristic" view of light. It also hobbled him later in life...

... what to conclude from that?
Einstein was motivated by the search for something more fundamental, which is the highest goal of science. Why, then, do we feel the need to pretend that it was a search for something fundamental? Why can we not simply live in the truth?
 
  • #133
Ken G said:
Einstein was motivated by the search for something more fundamental, which is the highest goal of science. Why, then, do we feel the need to pretend that it was a search for something fundamental? Why can we not simply live in the truth?

Um... as much as I'm dying to say, "[We] can't HANDLE the truth!", I won't... I did.

Anyway, why?... because that search has continually borne fruit such as QM, and Relativity, and perhaps elements of String Theory. The search itself tends to drive the field forward, but then the search becoming myopic is clearly crippling.

Einstein certainly seemed to believe that there was an underlying elegance he could uncover, although it's true that he settled for "more" fundamental.

I'd add, the truth is ultimately inconsistent in the absence of a better means to join gravity and the forces described by QM.
 
  • #134
Ken G said:
But when trying to model something like free will, when will you claim success?

apeiron said:
So arguing that a modeling strategy works in some cases does not prove that it must work in all cases.

Pythagorean said:
As we all point out though, the model is not the reality. The models are (successfully) predicting deterministic behavior of neural networks, but they don't pretend to be the full story. It's a story about the electrical signal carried by the neurons. We don't consider, for example, . . . But they can, and especially if those behaviors are your interest.

I doubt that's any scientific field's prevailing philosophy; please provide clear evidence of this. I think this is your interpretation, particularly given that you use "nothing but" as a qualifier. The prevailing philosophy is empiricism. To say there's nothing but local, causal interactions that create those global behaviors is a claim I've never seen. Whether it's true or not isn't important though. The interesting part of these complex systems is how the global behavior causes local interactions to occur, regardless of whether the global behavior originated from local interactions or not. We're not really interested in chicken and egg arguments in the lab; that's for philosophy forums.
Let’s talk about what these models are for a minute because I think the philosophy of why they are the way they are is being overlooked.

Going back to the 1950’s, when computers were just starting to be used for modeling natural phenomena, the first use of those models was in aerospace, where wings, for example, needed to be very accurately modeled. In a landmark paper by Turner et al., “Stiffness and Deflection Analysis of Complex Structures” (J. of the Aeronautical Sciences), the authors talk briefly about the fundamental philosophy behind why the model is created the way it is. Remember that although they’re referring to stresses and displacements in a solid structure, the same philosophy is used for modeling fluid behavior, such as Bénard cells, and for modeling brain behavior with compartmental models such as those in the Blue Brain project, as I'll show in a moment.
The analysis may be approached from two different points of view. In one case, the forces acting on the members of the structure are considered as unknown quantities. In a statically indeterminate structure [think of this as your “top down constraint”], an infinite number of such force systems exist which will satisfy the equations of equilibrium. The correct force system is then selected by satisfying the conditions of compatible deformations in the members. ...

In the other approach, the displacements of the joints in the structure are considered as unknown quantities. An infinite number of systems of mutually compatible deformations in the members are possible; the correct pattern of displacements is the one for which the equations of equilibrium are satisfied.

So what does that mean? He’s suggesting the fundamental philosophy behind all of classical mechanics. He’s pointing out that there needs to be an equilibrium condition at every point in the system: conservation of mass, energy and momentum, and equilibrium of forces (think "free body diagrams"), and so on. This can be static or dynamic equilibrium. The system as a whole isn’t in equilibrium unless all the parts within that system are also in equilibrium at all times.

The philosophy behind this model also points to a second fundamental premise. The forces or causal effects at every point within the system are local. Every local event is dependent on being in equilibrium, but it should also be emphasized that those equilibrium conditions are due to the local interplay of the parts. In the case of structural elements, those parts flex and create forces on neighboring elements due to their ability to compress or stretch (ie: the modulus of the material), the amount of mass and force (ie: stress) that allows that part to accelerate, etc ...
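As a minimal sketch of what the displacement formulation looks like in practice (illustrative numbers of my own, not from Turner's paper): the global stiffness matrix is assembled purely from local element contributions, and solving K u = f then enforces force equilibrium at every node simultaneously.

```python
# Displacement-method sketch for a 1D chain of linear springs (illustrative
# values only).  Each element contributes only to the rows/columns of its own
# two nodes, i.e. the assembly is purely local; solving K u = f then enforces
# force equilibrium at every node at once.
import numpy as np

k = [100.0, 150.0, 200.0]          # element stiffnesses (N/mm), assumed values
n_nodes = len(k) + 1

K = np.zeros((n_nodes, n_nodes))
for e, ke in enumerate(k):          # scatter each local element stiffness into K
    K[e:e+2, e:e+2] += ke * np.array([[1.0, -1.0],
                                      [-1.0, 1.0]])

f = np.zeros(n_nodes)
f[-1] = 10.0                        # 10 N applied at the free end

# Apply the boundary condition u_0 = 0 (fixed support) and solve for the rest.
u = np.zeros(n_nodes)
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])

print("Nodal displacements (mm):", np.round(u, 4))
```

Nothing in the assembly or the solution refers to anything other than each element and its own two nodes.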

In the Hodgkin-Huxley model of neurons, the elements of the neuron act like electrical elements such as resistors and capacitors, but the fact that they are similar to those electrical elements has nothing to do with the much more important and fundamental premise. The Hodgkin-Huxley model could have used water pipes, valves and pressure vessels. In fact, electrical phenomena and fluids are highly analogous, so the use of electrical circuits was likely chosen because that is what Hodgkin and Huxley were familiar with, and because they actually USED electrical components in their tests rather than converting to a fluidics basis. But the model itself is inconsequential. The philosophy behind what’s going on in nature is what’s important, and for the compartmental models of the brain, regardless of what kind of parts (ex: resistors and wires, or valves and pipes) we use to model neurons with, no matter what kind of mathematics we decide on or what equations we use, there will always be the same philosophical premise that is identical for ALL of those models. That premise is twofold, as described by Turner.

1. Static and/or dynamic equilibrium conditions must exist between every point in the system.
2. These equilibrium conditions are not affected by nonlocal causes; they are affected only by those conditions that are immediately local to the affected point in the system.
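To make those two premises concrete, here's a sketch of a passive multi-compartment cable of the sort the Hodgkin-Huxley and NEURON-style compartment models build on (parameter values are made up purely for illustration): each compartment's voltage changes only through its own membrane leak and the axial currents exchanged with its immediate neighbours.

```python
# Passive multi-compartment cable sketch (illustrative parameters only).  Each
# compartment obeys a local current balance: its membrane voltage changes only
# through its own leak current and the axial currents exchanged with its
# immediate neighbours, i.e. premises 1 and 2 above in miniature.
import numpy as np

n_comp  = 10          # number of compartments
C_m     = 1.0         # membrane capacitance per compartment (nF), assumed
g_leak  = 0.05        # leak conductance per compartment (uS), assumed
E_leak  = -65.0       # leak reversal potential (mV)
g_axial = 0.5         # coupling conductance between neighbours (uS), assumed
dt, t_stop = 0.025, 50.0   # time step and duration (ms)

V = np.full(n_comp, E_leak)          # start every compartment at rest
for step in range(int(t_stop / dt)):
    t = step * dt
    I_inj = np.zeros(n_comp)
    if 5.0 <= t < 25.0:
        I_inj[0] = 0.2               # current injection (nA) into compartment 0
    # Axial current into each compartment from its immediate neighbours only.
    I_axial = np.zeros(n_comp)
    I_axial[1:]  += g_axial * (V[:-1] - V[1:])
    I_axial[:-1] += g_axial * (V[1:] - V[:-1])
    # Local current balance: C_m dV/dt = -leak + axial + injected.
    dVdt = (-g_leak * (V - E_leak) + I_axial + I_inj) / C_m
    V = V + dt * dVdt

print("Final voltages (mV):", np.round(V, 2))
```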
One might now question whether this is REALLY a fundamental philosophical notion or just how science models nature in general. I’m sure that’s what must go through anyone’s mind as they consider the above. However, it is clear that discussions around emergence have popped up to address this. Bedau (“Weak Emergence”, Philosophical Perspectives) also recognizes this when he refers to weakly emergent phenomena such as those described, for example, by cellular automata:
The phrase “derivation by simulation” might seem to suggest that weak emergence applies only to what we normally think of as simulations, but this is a mistake. Weak emergence also applies directly to natural systems, whether or not anyone constructs a model or simulation of them. A derivation by simulation involves the temporal iteration of the spatial aggregation of local causal interactions among micro elements. That is, it involves the local causal processes by which micro interactions give rise to macro phenomena. The notion clearly applies to natural systems as well as computer models. So-called “agent-based” or “individual based” or “bottom up” simulations in complexity science have exactly this form. They explicitly represent micro interactions, with the aim of seeing what implicit macro phenomena are produced when the micro interactions are aggregated over space and iterated over time. My phrase “derivation by simulation” is a technical expression that refers to temporal iteration of the spatial aggregation of such local micro interactions.

That really is how science and philosophy view the world. There is no room for “downward causes” that interact at the local level and might force the system to change in a way that isn’t predictable from examining the local causal events. I will only add the caveat that these models describe how classical mechanical interactions are treated, not how quantum mechanical interactions are viewed. That is a different issue and isn’t of relevance here, which is the ONLY reason I’m not going into it. If the above description of classical mechanics and the philosophy behind it isn’t clear and understood, going into the QM description is only going to confound things.

Neurons interact at a classical scale; there are no quantum mechanical interactions between them that might lead to nonlocal causes arising from ‘top down constraints’. If/when such a phenomenon is shown to be pertinent to how a brain works, that’s fine. But for now, no one should be bringing quantum mechanics up as a reason to believe there's room for downward causation or any similar concept that might be put into different terms such as 'top-down constraints'. Such terms are misleading, since 'top down' can mean something as mundane as the way a door hinge makes the door rotate only around a given axis, which certainly isn't any kind of downward causation as referred to in the philosophy of science.

One might now ask whether downward causation (sometimes referred to as “strong downward causation”) has anything left to do with any of this. If events are local only, as indicated above, how can global states intervene in the local events? Strong downward causation addresses this and suggests that “macro-causal powers have effects at both macro and micro levels, and macro-to-micro effects are termed downward causation” (Bedau). Emmeche et al. describe it as “a given entity or process on a given level may causally inflict changes or effects on entities or processes on a lower level”. These authors and many more, including the paper by Farkas mentioned by apeiron, all dismiss strong downward causation as crackpot science. There’s no evidence for it and there’s no support for it. For future reference, please don’t point to crackpot web sites that suggest strong downward causation is some kind of debatable concept. If a paper suggests there is some kind of top-down constraint, it doesn’t necessarily refer to strong downward causation. The concept of strong downward causation needs to go back into the closet it came out of, as does any similar argument that suggests micro-level causes are commandeered to act differently in different systems.
 
  • #135
nismaratwork said:
Um... as much as I'm dying to say, "[We] can't HANDLE the truth!", I won't... I did.
:D
Anyway, why?... because that search has continually borne fruit such as QM, and Relativity, and perhaps elements of String Theory. The search itself tends to drive the field forward, but then the search becoming myopic is clearly crippling.
The search doesn't require adopting a belief system, it just involves doing science. The only required faith is the faith in the process, not the faith in the outcome. In fact, I would argue that faith in the outcome is what tends to close our minds to future advances, rather than the opposite. The same might be relevant to upwardly-causal approaches to free will.
Einstein certainly seemed to believe that there was an underlying elegance he could uncover, although it's true that he settled for "more" fundamental.
And indeed, elegance is certainly part of what we are seeking. We should look for elegance, and be happy when we find it, but not be seduced by it.

I'd add, the truth is ultimately inconsistent in the absence of a better means to join gravity and the forces described by QM.
Right, and although it is natural to always try to find more fundamental unifications that correct inconsistencies, it is unnatural to expect that physics will ever be absent of inconsistencies. Never was, why do we imagine it ever will be?
 
  • #136
Ken G said:
:D
The search doesn't require adopting a belief system, it just involves doing science. The only required faith is the faith in the process, not the faith in the outcome. In fact, I would argue that faith in the outcome is what tends to close our minds to future advances, rather than the opposite. The same might be relevant to upwardly-causal approaches to free will.
And indeed, elegance is certainly part of what we are seeking. We should look for elegance, and be happy when we find it, but not be seduced by it.

I agree.


Ken G said:
Right, and although it is natural to always try to find more fundamental unifications that correct inconsistencies, it is unnatural to expect that physics will ever be absent of inconsistencies. Never was, why do we imagine it ever will be?

I don't know... I think people expect the universe to conform to an anthropic view.
 
  • #137
nismaratwork said:
I don't know... I think people expect the universe to conform to an anthropic view.
Right, and we always think the ancient Greeks were so naive to put the Earth at the center! Still haven't learned that lesson.
 
  • #138
Ken G said:
Right, and we always think the ancient Greeks were so naive to put the Earth at the center! Still haven't learned that lesson.

I think it's amazing that we've come so far based on such limited personal experiences... it's a testament to the scientific method IMO.
 
  • #139
Agreed. Now the question is, where does the scientific method lead into the study of free will, and do we need a few new tricks?
 
  • #140
Ken G said:
Agreed. Now the question is, where does the scientific method lead into the study of free will, and do we need a few new tricks?

I'm stumped when it comes to that application... I feel as though we're trying to visualize something based on concepts like the mind and consciousness, which are not well defined. Well, they tend to be somewhat fluid at least, and each new discovery seems to raise questions in the philosophical arena, not resolve them.
 
