Neural correlates of free will

In summary, Benjamin Libet's work suggests that our decisions to act occur before our conscious awareness of them. The problem this poses for the idea of free will is that it seems to set up an either/or battle between determinism and free will. Some might respond by accepting that the neurological correlates of free will are deterministic (if one wishes to adopt a kind of dualistic picture where all that is physical is deterministic and free will is housed in some extra-physical seat of conscious choice). Others might look critically at the very assumption that physically identifiable processes are deterministic in some "absolutely true" way, such that they could preclude a concept of free will.
  • #71
Ken G said:
Ah, interesting question. No doubt this is indeed the central basis of the Platonist idea that mathematical truths lie at the heart of reality, such that when we discover those truths, we are discovering reality.

Plato was really more concerned with the form of things than mathematical truth as such. But maths is the science of patterns, so there is a large overlap.

The modern way of viewing this would be that the realm of form (as the counterpart to the realm of substance) is about self-organisation. Or global constraints (yes, sigh). It is about all the self-consistent patterns that can exist. And so it is all about symmetry principles. Changes which are not a change.

Maths looks like reality because maths creates a library of possible pattern descriptions and reality is a self-organising pattern.

Wolfram's cellular automata project was so funny because he took a very simple pattern generator and then just exhaustively generated every possible pattern to see which ones resembled reality.

But in general, this is what maths does. It defines some broad axiomatic truths (it creates some global constraints) and then generates all the possible patterns made possible by those constraints.
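A toy version of Wolfram's exhaustive survey, just to make the idea concrete; the rules, grid size and run length below are my own illustrative choices, not his actual setup.

```python
# Elementary cellular automaton sketch: a rule is just a number 0-255, and the
# "library of patterns" is generated by running each rule from a single seed cell.

def step(cells, rule):
    """Apply an elementary CA rule to one row of 0/1 cells (periodic boundary)."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(rule, width=31, steps=15):
    """Start from a single live cell and print the pattern the rule generates."""
    row = [0] * width
    row[width // 2] = 1
    for _ in range(steps):
        print("".join("#" if c else "." for c in row))
        row = step(row, rule)

# A few well-known rules; loop over range(256) to survey them all, Wolfram-style.
for rule in (30, 90, 110):
    print(f"--- rule {rule} ---")
    run(rule)
```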

The problem then is that the axioms can't be known to be true (even if the consequences that flow from the axioms are deterministic, or at least taken to be proven step by step).

So the forms are real. But the maths is the modelling of forms.

However, maths can also hope to model self-organisation itself. Which is where chaos theory for example comes in as a hugely successful way of modelling "random nature".

Key constraints (on linearity for instance) are relaxed. The system is then allowed to organise its own constraints as part of what it does.

This is why maths can make historical progress. The early work was overloaded with presumed constraints (such as the presumption that space was flat, that it had just three dimensions, that time was a separate clock, etc). Too much was taken as globally axiomatic when it was only specific to a local system.

But maths has just kept relaxing the constraints so as to arrive at the fewest and simplest axioms. And then more recently started to do the other part of the job - invent models of constraint development. Models of global self-organisation.

So first strip the constraints out, then find the way they can build themselves back in. Once maths reaches this level of modelling, it really will be powerfully close to the truth of a self-organising reality.
 
  • #72
Lievo said:
Don't you think these guns (http://en.wikipedia.org/wiki/Gun_(cellular_automaton)) challenge this view?
I responded to this one above.
 
  • #73
Pythagorean said:
The truncation error is not significant. If I run the same deterministic system twice, I won't get different results because of truncation error. The two systems will have exactly the same fate.

Well you agree that the map is not the terrain.

Of course the map (the computer simulation) MUST keep repeating its truncation errors at each iteration. The computer has been designed to be able to do just this...without error :biggrin:.

But the point is the reality is doing something else. There are no truncation errors in its "computation" of a chaotic trajectory.

The simulation actually starts on a different trajectory with every iteration because a truncation error changes what would have been the exact next state every time. The shadowing lemma just argues this is not a significant problem in the pragmatic sense. However in philosophical arguments demanding absolutes, such as absolute determinism, truncation error is indeed "the dirty secret".
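To make the truncation point concrete, here is a small sketch using a stand-in chaotic system of my own choosing (the logistic map, not anything discussed above): each floating-point precision repeats its own rounding perfectly on every run, yet the two precisions drift onto visibly different trajectories.

```python
# Truncation-error sketch with the logistic map x -> r*x*(1-x) at r = 4 (chaotic).
# Re-running either precision gives bit-identical output (the computer faithfully
# repeats its own truncation), but float32 and float64 soon disagree about the
# trajectory because they round differently at every step.
import numpy as np

def trajectory(x0, r=4.0, n=60, dtype=np.float64):
    x, r = dtype(x0), dtype(r)
    out = []
    for _ in range(n):
        x = r * x * (dtype(1.0) - x)
        out.append(float(x))
    return out

lo = trajectory(0.2, dtype=np.float32)
hi = trajectory(0.2, dtype=np.float64)

for i in (10, 20, 30, 40, 50):
    print(i, lo[i], hi[i], abs(lo[i] - hi[i]))
```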
 
  • #74
apeiron said:
So first strip the constraints out, then find the way they can build themselves back in. Once maths reaches this level of modelling, it really will be powerfully close to the truth of a self-organising reality.
I think that approach is probably a description of the next-level of sophistication of models, rather than what makes the models the reality. We must always avoid the mistake of thinking that we are one step away from making our models real! People have pretty much always thought that. But I agree with the sentiment you are expressing that seeking constraints was too brute-force of an approach, and symmetries are in some sense the opposite of constraints-- they are patterns that emerge from necessity more so than fiat, because all math is a search for order, and the first step for finding order is not the search for how to strong-arm the behavior into what you want, but rather how to recognize all the things you imagined were different but really weren't. That's how you throw out complexity without throwing out the "business end" of what you want to be left with-- and only after you've done that should you start looking for the rules of behavior.

It's an interesting insight that sometimes the rules require re-introducing the complexity that had been removed, like the way symmetry-breaking gives rise to interactions, played out against the backdrop of approximately unbroken symmetries. So I'm sympathetic to your perspective that the next place models need to go is into the realm of self-organization and top-down contextual support for constraints, rather than trying to intuit the bottom-up local constraints that will magically make it all work. But I still think that when you have done that, you will have just another set of models-- useful for doing what you wanted them to do, and important expressly because they represent a new kind of want.
 
  • #75
Q_Goest said:
I responded to this one above.
What was your response?
 
  • #76
lievo said:
what was your response?

.
..:
 
  • #77
Ken G said:
one might say "see, it's still fundamentally mathematical, it's just different mathematics." To which I would respond, how many times are we allowed to say "OK, we were wrong before, but this time we have it right."
Ah, I'm surprised you miss here an important distinction to make between mathematical systems and mathematical models. Newton to relativity is not a switch between systems, it's a switch between models, and different models are not different mathematics, they're different... well, models. The formalist-Platonist match is about whether there exists a single system at the root of reality or whether systems of axioms are human choices that may have little to do with physical reality. Up to now I'm not aware of any scientific theory that is not a model of Peano arithmetic (which is a system), so it's not different mathematics, only different models of the same mathematics.
 
  • #78
Ken G said:
I think that approach is probably a description of the next-level of sophistication of models, rather than what makes the models the reality. We must always avoid the mistake of thinking that we are one step away from making our models real!

Oh I agree completely. The map will still not be the terrain. But still, I wanted to highlight what real progress would look like.

One of the complaints about modern maths is that it has departed from reality modelling and now spends too much time inventing fictional realities. And it is true. A lot of what gets studied and published just creates an unnecessary clutter of journal papers.

But this masks a general progress in relaxing the constraints. For instance, projects like category theory. And it also masks the new thing of modelling self-organisation.

and symmetries are in some sense the opposite of constraints

I would say that symmetries are what you get having stripped out all constraints. And so symmetries become a fundamental ground on which you can start modelling the self-organising development of constraints.

This is precisely the metaphysics of CS Peirce, what he was talking about with his process of abduction, his machinery of semiosis, his logic of vagueness.

It's an interesting insight that sometimes the rules require re-introducing the complexity that had been removed, like the way symmetry-breaking gives rise to interactions, played out against the backdrop of approximately unbroken symmetries. So I'm sympathetic to your perspective that the next place models need to go is into the realm of self-organization and top-down contextual support for constraints, rather than trying to intuit the bottom-up local constraints that will magically make it all work. But I still think that when you have done that, you will have just another set of models-- useful for doing what you wanted them to do, and important expressly because they represent a new kind of want.

Yes, the map is not the terrain. No argument.

But as you say, the big switch that so many are struggling to take is the shift from the local view to the global one.

People are still trying to bury the global organisation in the local interactions. They still want to do the old-school atomism that worked so well at the birth of modern science - the atomism that scooped up all the low hanging fruit and then pretended the orchard was now bare.

Atomism has become nothing less than a scientific faith, a religion. There is only one answer when it comes to modelling, and all other heretical views must burn at the stake. :tongue2:

That is what makes PF so enjoyable. It is a natural bastion of reductionist fundamentalism. On other academic forums where I am just surrounded by biologists and neuroscientists, the debates are only about hair-splitting differences in shades of opinion. Like should we call this self-organising view of thermodynamics that has emerged, the maximum energy dispersion principle (MEDP) or the maximum entropy production principle (MEPP)?
 
  • #79
Lievo said:
Ah, I'm surprised you miss here an important distinction to make between mathematical systems and mathematical models.
But all I'm saying is that one cannot make the argument that reality is fundamentally mathematical on the basis of the success of current models, without the argument falling under the weight of all the past models that were replaced by current models. One is essentially claiming that there is something special about the present, which seems like an illusory claim.
The formalist-Platonist match is about whether there exists a single system at the root of reality or whether systems of axioms are human choices that may have little to do with physical reality. Up to now I'm not aware of any scientific theory that is not a model of Peano arithmetic (which is a system), so it's not different mathematics, only different models of the same mathematics.
It is true that the core system of mathematics, a la Whitehead and Russell or some such thing, has not really changed, but that system by itself (and now if we wish to use the term "system", we will need to distinguish mathematical systems from physical systems) is never used to make the argument that reality is fundamentally mathematical. That mathematical system is based on axioms, and the axioms have never been found to be inconsistent, that is true-- but a system of finite axioms is a far cry from reality. To make contact with reality, the system must be injected into a model that invokes postulates, by which I mean things we have no a priori idea are true or not (whereas axioms must seem true), and that's essentially what physics is. So when someone claims reality is mathematical, it is not because of the success of Peano axioms or some such, it is because of the success of those physical postulates. And those are what keep changing-- the very success the argument is shouldered by keeps letting the argument down.
 
  • #80
octelcogopod said:
But isn't the collective whole of all the macroscopic systems actually one big deterministic one at the lowest level?
Do you mean is it really deterministic, i.e., determined, or do you mean can we find value in modeling it with deterministic mathematics? The latter is certainly true, but the former claim is the one I am calling into doubt.
Also just for hypothesizing; wouldn't the universe during the big bang start out with almost no separated states and then evolve into many different states and thus create increased separation of systems, but ultimately be one big system?
It sounds like you are asking if there is such a thing as "the state of the universe as a whole." I think that's an important question, and people do tend to talk about such a state, but no one has ever demonstrated the concept really makes any sense. For one thing, it leads us to the many-worlds interpretation of quantum mechanics, which to some seems quite absurd.
 
  • #81
Hi Lievo,
Lievo said:
What was your response?
See post 54, page 4.
 
  • #82
Pythagorean said:
Anyway, as Ken's been saying, the computer models are tools for analysis, not deep philosophical statements.
apeiron said:
But it makes a myopic basis for philosophy as it leaves out the other sources of causality.
This is a good discussion, but I disagree. There IS a deep philosophical reason why FEA is done the way it is. It isn't just so we can model something, and it DOES take into account every known source of causality. Benard cells, for example, are highly dynamic, highly nonlinear phenomena (that have been used as a potential example of downward causation) that are successfully modelled (http://wn.com/Rayleigh-Bénard) only because the model can take into account ALL the causality and ALL the philosophy of why fluids behave as they do. There's no need to suggest some kind of downward causation when local interactions are sufficient to bring about all the phenomena we see in Benard cells.

That same philosophy is used for modeling neurons in the brain, not just because it works, but because the philosophy of why the brain and neurons are doing what they do matches what science believes is actually happening. In short, the basic philosophy FEA analysis adheres to is that of first breaking down a complex structure, and then calculating the conditions on the individual elements such that they are in dynamic equilibrium both with the overall boundary conditions acting on the system and also with each other.
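A toy illustration of that element-by-element philosophy, under drastic simplifying assumptions of my own (1D steady-state heat conduction by finite-difference relaxation, standing in for true finite elements, fluids or neurons): nothing appears in the calculation except the two boundary conditions and nearest-neighbour balance, yet the global temperature profile emerges.

```python
# "Break it into elements and balance each one": 1D steady-state heat conduction
# by Jacobi relaxation. Only local neighbour coupling and the two boundary
# conditions enter the update; the global linear profile simply emerges.
import numpy as np

n = 50                      # number of nodes (elements)
T = np.zeros(n)
T[0], T[-1] = 100.0, 0.0    # boundary conditions on the overall system

for sweep in range(20000):
    T_new = T.copy()
    T_new[1:-1] = 0.5 * (T[:-2] + T[2:])     # each interior node balanced with its neighbours
    if np.max(np.abs(T_new - T)) < 1e-8:     # converged: every element in equilibrium
        break
    T = T_new

# The converged profile is the straight line between the two boundary values.
print(f"converged after {sweep} sweeps; midpoint temperature = {T[n // 2]:.2f}")
```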

Now I’m not a neuroscience expert by any stretch. But it’s clear that neuroscience is also dedicated to using the same computational approach engineers use. They use compartment models just as FEA uses elements. Here's a picture of what you might see on the computer screen when you create these models:
http://www-users.mat.umk.pl/~susur/1b.jpg
Again, this isn't done simply because they want to model something. It's done because that's how they believe the brain works. These models include all the causality and all the philosophy of what's going on, and their models are approaching the point where they are matching experiment. One common software program I've heard of is NEURON (http://www.neuron.yale.edu/neuron/); such tools are basically finite element analysis (FEA) on a grand scale.

Here’s a few examples:
Herz, “Modeling Single-Neuron Dynamics and Computations: A Balance of Detail and Abstraction” (Science 314, 80 (2006))
Herz discusses 5 different “compartment models” and how and why one might use one compared to another. Detailed compartment models are listed in Roman numerals I to V. Model III is the common Hodgkin-Huxley model. Herz concludes with:
These developments show that the divide between experiment and theory is disappearing. There is also a change in attitude reflected by various international initiatives: More and more experimentalists are willing to share their raw data with modelers. Many modelers, in turn, make their computer codes available. Both movements will play a key role in solving the many open questions of neural dynamics and information processing – from single cells to the entire brain.
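Per the paper, "Model III" is the Hodgkin-Huxley formalism, so for concreteness here is a minimal single-compartment forward-Euler sketch of it using the standard textbook parameter set; the stimulus, step size and crude spike counter are my own choices, and real compartment simulators such as NEURON couple many such compartments over reconstructed morphologies with far better integrators.

```python
# Minimal single-compartment Hodgkin-Huxley model, forward Euler integration.
# Classic parameters in mV, ms, uF/cm^2, mS/cm^2.
import numpy as np

C, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.4

def a_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def b_m(V): return 4.0 * np.exp(-(V + 65.0) / 18.0)
def a_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def b_h(V): return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def a_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def b_n(V): return 0.125 * np.exp(-(V + 65.0) / 80.0)

dt, T_total = 0.01, 50.0                  # ms
V, m, h, n = -65.0, 0.05, 0.6, 0.32       # resting potential and gating variables
spikes = 0
for step in range(int(T_total / dt)):
    I_ext = 10.0 if step * dt > 5.0 else 0.0          # step current injection, uA/cm^2
    I_ion = (g_Na * m**3 * h * (V - E_Na)
             + g_K * n**4 * (V - E_K)
             + g_L * (V - E_L))
    V_new = V + dt * (I_ext - I_ion) / C
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
    if V < 0.0 <= V_new:                  # crude spike count: upward zero crossing
        spikes += 1
    V = V_new

print(f"spikes fired in {T_total} ms at 10 uA/cm^2: {spikes}")
```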

Another typical paper, by Destexhe, “Dendritic Low-Threshold Calcium Currents in Thalamic Relay Cells” (J. of Neuroscience, May 15, ’98), describes work being done to compare dissociated neurons (in petri dishes) to neurons in vivo. I thought this one was interesting because it explicitly states the presumption that neurons can be made to function in the petri dish exactly as they do in vivo. He also uses the computer code NEURON as mentioned above.

There are literally tons of papers out there that show how neurons are made to act exactly as local causal physics would have them (i.e., weak emergence). Yes, neurons are highly nonlinear, and yes, to some degree they exhibit stochastic behavior - to the experimentalist; which raises the question of whether they truly are probabilistic or whether there are ‘hidden variables’, so to speak, that we simply haven’t nailed down. Even if we find that neurons exhibit truly probabilistic behaviors such as, for example, radioactive decay exhibits, is that single feature of a neuron truly going to lead us to finding “free will”?
 
  • #83
Q_Goest said:
There IS a deep philosophical reason why FEA is done the way it is. It isn't just so we can model something, and it DOES take into account every known source of causality.
I agree this is a good discussion, and I would say that what your posts are primarily doing is making very explicit the good things that FEA models do. There is no dispute these models are powerful for what they are powerful at; the overarching question is: can they be used to model free will, or are they in some sense leaving that behind and then reaching conclusions about what it isn't?
That same philosophy is used for modeling neurons in the brain, not just because it works, but because the philosophy of why the brain and neurons are doing what they do matches what science believes is actually happening.
This is the place where I would say your argument keeps hitting a wall. Science has no business believing anything about what is actually happening, that is an error in doing science that has bit us over and over. It's simply not science's job to create a belief system, it's science's job to create models that reach certain goals. So science starts by identifying its goals, and then seeking to achieve them. There's simply no step in the process that says "believe my model is the reality." The whole purpose of a model is to replace reality with something that fits in our head, and makes useful predictions. But if the nature of what the model is trying to treat changes, if the questions we are trying to address change, then we don't take a model that is built to do something different and reach conclusions in the new regime, any more than we try to drive our car into the pond and use it like a submarine. It just wasn't built to do that, so we need to see evidence that FEA methods treat free will, not that they are successful at other things so must "have it right". We don't take our models and tell reality how to act, we take how reality acts and see if we can model it. How does one model free will, and where is the evidence that an FEA approach is the way to do it?

In short, the basic philosophy FEA analysis adheres to is that of first breaking down a complex structure, and then calculating the conditions on the individual elements such that they are in dynamic equilibrium both with the overall boundary conditions acting on the system and also with each other.
The whole is the sum of its parts. Yes, that does seem to be the FEA approach-- so what we get from it is, whatever one gets from that approach, and what we don't get is whatever one doesn't get from that approach. We cannot use validity in some areas as evidence of uniform validity. It's not that FEA is doing something demonstrably wrong, it is that it might just not be doing something at all.
Even if we find that neurons exhibit truly probabilistic behaviors such as, for example, radioactive decay exhibits, is that single feature of a neuron truly going to lead us to finding “free will”?
I agree that random behavior is no better than deterministic behavior in getting free will, we are saying we don't think that either mode of operation is going to get to the heart of it. It's just something different from either deterministic or random behavior of components, it's something where the whole cannot be expected to emerge from algorithmic behavior of individual parts. The algorithms of a successful model will need to be more holistic-- if the entire issue is algorithmic in the first place (which I find far from obvious, given the problems with using our brains to try and understand our brains).
 
  • #84
Ken G, Apeiron, Q_Goest, Lievo:... I'm out of my depth philosophically, but I just want to say that it's a real pleasure reading this interchange.


@Q_Goest: That's a very useful model... how did you get into this without a primary interest in neuroscience?

This is all more (pleasantly) than I expected from Philosophy... even here.
 
  • #85
Q_Goest said:
There are literally tons of papers out there that show how neurons are made to act exactly as local causal physics would have them (i.e., weak emergence). Yes, neurons are highly nonlinear, and yes, to some degree they exhibit stochastic behavior - to the experimentalist; which raises the question of whether they truly are probabilistic or whether there are ‘hidden variables’, so to speak, that we simply haven’t nailed down. Even if we find that neurons exhibit truly probabilistic behaviors such as, for example, radioactive decay exhibits, is that single feature of a neuron truly going to lead us to finding “free will”?

I ask about brains and you talk about neurons! So the scale factor is off by about 11 orders of magnitude. :rolleyes:

But anyway, your presumption here is that neurons make brains, whereas I am arguing that brains also make neurons. The system shapes the identity of its components.

One view says causality is solely bottom-up - constructed from atoms. The other says two kinds of causality act synergistically: there are also the top-down constraints that shape the atoms. So now both the atoms and the global scale emerge jointly in a process of differentiation~integration.

Absolutely everything is "just emergent".

The FEA approach you describe only works because the global constraints are taken as already in existence and so axiomatic. What does not change does not need to be mentioned when modelling.

So take a benard cell. An entropic gradient is presumed. The source and the sink are just there. The model does not seek to explain how this state of affairs developed, just what then happens as a consequence.

Order then arises at a critical temperature - the famous hexagonal cells. Local thermal jostlings magically become entrained in a global scale coherent motion.

Now these global scale cells do in fact exert a downward causal effect. As just said, they entrain the destinies of individual molecules of oil. This is what a dissipative structure is all about: global constraints (the order of the flow) acting to reduce the local degrees of freedom (the random thermal jostle of the molecules becomes suddenly far less random, far more determined).

So benard cells are frequently cited as an example of self-organisation due to the "mysterious" development of global order.

There are other features we could remark on, like the fact that the whorls are hexagonal (roughly) rather than circular. The fact that the activity is confined (globally constrained) reduces even the "local degrees of freedom" of these benard cells. Circular vortexes are the unconstrained variety. Hexagonal ones are ones with extra global constraints enforced by a packing density.

Note too that the macro-order that the benard cell is so often used to illustrate is a highly delicate state. Turn the heat up a little and you soon have the usual transition to chaos proper - whorls of turbulence over all scales, and no more pretty hexagonal cells.

In a natural state, a dissipative structure would arrange itself to maximise entropy through-put. The benard cell is a system that some experimenter with a finger on the bunsen burner keeps delicately poised at some chosen stage on the way to chaos.

So the benard cell is both a beautiful demonstration of self-organising order, and a beautifully contrived one. All sorts of global constraints are needed to create the observed cellular pattern, and some of them (like a precisely controlled temperature) are wildly unnatural. In nature, a system would develop its global constraints rapidly and irreversibly until entropy throughput is maximised (as universality describes). So the benard cell demonstration depends on frustrating that natural self-organisation of global constraints.

So again, the challenge I made was find me papers on brain organisation which do not rely on top-down causality (in interaction with bottom-up causality).

Studying neurons with the kind of FEA philosophy you are talking about is still useful because it allows us to understand something about neurons. Reductionism always has some payback. But philosophically, you won't be able to construct conscious brains from robotic neurons. Thinking about the brain in such an atomistic fashion will ensure you will never see the larger picture on the causality.
 
  • #86
I was looking to see if I had something to add and found I had not answered the following. Sorry for being late.

apeiron said:
But BPP assumes determinism (the global constraints are taken to be eternal, definite rather than indefinite or themselves dynamic). So no surprise that the results are pseudo-random and Ockham's razor would see you wanting to lop off ontic randomness.
Yes (BPP is computable, so deterministic in my view), but I'm not so sure there is no surprise in P=BPP. Maybe we should wait for a formal proof that the equality truly holds.
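As a trivial illustration of the "computable, hence deterministic" point (my own example, and nothing to do with whether P=BPP actually holds, which as noted is still unproven): a randomized computation becomes exactly reproducible once its pseudorandom seed is fixed, because the "randomness" is itself a computable process.

```python
# A "randomized" Monte Carlo estimate of pi is deterministic once the
# pseudorandom source is pinned down by a fixed seed.
import random

def mc_pi(n, seed):
    rng = random.Random(seed)               # fixed seed -> fixed sequence of "random" numbers
    inside = sum(rng.random()**2 + rng.random()**2 <= 1.0 for _ in range(n))
    return 4.0 * inside / n

print(mc_pi(100_000, seed=42))   # run it twice:
print(mc_pi(100_000, seed=42))   # bit-identical output, no ontic randomness required
```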

PS:
apeiron said:
I ask about brains and you talk about neurons! So the scale factor is off by about 11 orders of magnitude. :rolleyes:
[STRIKE]12, actually.[/STRIKE]
EDIT forget that :blushing:
 
  • #87
Ken G said:
I agree that random behavior is no better than deterministic behavior in getting free will, we are saying we don't think that either mode of operation is going to get to the heart of it. It's just something different from either deterministic or random behavior of components, it's something where the whole cannot be expected to emerge from algorithmic behavior of individual parts. The algorithms of a successful model will need to be more holistic-- if the entire issue is algorithmic in the first place (which I find far from obvious, given the problems with using our brains to try and understand our brains).

To reconnect to the OP, I would restate that the top-down systems argument is that the determined and the random are what get shaped up down at the local level due to the self-organisation of global constraints.

So down at the ground level, there are just degrees of freedom. As many as you please. Then from on high comes the focusing action. The degrees of freedom are constrained in varying degree.

When highly constrained, there is almost no choice but to act in some specified direction. The outcome looks deterministic (and can be modeled as deterministic in the language of atomism/mechanicalism).

When weakly constrained, there is lots of choice and the direction of action becomes "random". Things go off in all the directions that have been permitted.

An analogy might be a piston engine. The gas explosion has many degrees of freedom when unconstrained. But confined to a chamber with a piston, the result becomes deterministic. The action has a single available direction.

Freewill, voluntary action, selective attention, consciousness, etc, are all words that recognise that our reality comes with a global or systems level of downwards acting causality. We can organise our personal degrees of freedom in a way that meets evolutionary needs. My hand could wander off in any direction. But I can (via developmental learning) constrain it to go off in useful directions.

There is no battle going on in my brain between the dark forces of determinism and randomness. I am not a slave to my microcauses. Instead, just as I experience it, I can exert a global constraint on my microscale that delivers either a highly constrained or weakly constrained response.

I can focus to make sure I pick up that cup of coffee. Or I can relax and defocus to allow creative associations to flow. There is a me in charge (even if I got there due to the top-down shaping force of socio-cultural evolution :devil:).

It is only when I swing a golf club that I discover, well, there is also some bottom-up sources of uncontrolled error. There is some damn neural noise in the system. The micro-causes still get a (constrained) say!
 
  • #88
Lievo said:
[STRIKE]12, actually.[/STRIKE]
EDIT forget that :blushing:

Hmm, 10^11 neurons last time I counted, and 10^15 synaptic connections :tongue:.

But in actual size scale, just a 10^5 difference between neurons and brains. So an exaggeration there.
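A back-of-envelope check on that size ratio, assuming round numbers of my own (a soma of order 10 µm against a brain of order 10 cm):

```latex
\frac{L_{\text{brain}}}{L_{\text{neuron}}}
  \approx \frac{10^{-1}\ \mathrm{m}}{10^{-5}\ \mathrm{m}} = 10^{4}
\quad\text{(soma scale)},
\qquad
\frac{10^{-1}\ \mathrm{m}}{10^{-6}\ \mathrm{m}} = 10^{5}
\quad\text{(micron-scale synaptic structure)}
```

Either way the size ratio is of order 10^4 to 10^5, nowhere near the 10^11 one gets by counting neurons, which is the point being made.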
 
  • #89
Ken, apeiron:

Yes, we agree the map is not the territory. But the point about truncation error is that it's not particularly different from linear systems. It still happens whenever you stray from integers. This is a computational problem, not a theoretical problem.

That it's chaotic means you have to test your system's robustness, which means varying parameters and initial conditions over a wide range of values so that you can say "this behavior occurs in this system over a wide range of conditions". It really has nothing to do with the truncation error, only with the chaotic nature of the system itself. We really have quite sophisticated methods for handling that technical detail (that's a problem of digital signal processing, not theoretical science). I was always concerned about this coming into the research, but I recognize the difference now after hands-on experience formulating robustness tests. In fact, I so doubted my advisor's assurance at the time that I strenuously tested the error tolerance of my system to see how it changed the results, which is computationally expensive and revealed that her intuition was correct.

What's being studied in complex systems is general behavior (the fixed points of the system: stable, unstable, foci, saddle points, limit cycles, etc.) and the bifurcations (major qualitative changes in the system as a function of quantitative changes). Whether a particle went a little to the left or a little to the right is not significant to the types of statements we make about the system (which are not reductionist, deterministic statements, but general behavioral analysis). The plus side is that they can be reduced to physiologically meaningful statements that experimental biologists can test in the lab (as was done with the cell cycle: a case where theoretical chaos and reductionist biology complemented each other, despite their apparent inconsistencies).
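A sketch of what such a robustness test looks like, using a toy system of my own (the logistic map, not any of the biological models discussed): sweep the control parameter and the initial condition over ranges and record only the qualitative behaviour, ignoring where any single trajectory ends up.

```python
# Robustness-sweep sketch: vary a parameter and the initial condition over ranges
# and classify only the qualitative behaviour (fixed point, periodic, chaotic),
# rather than tracking the exact fate of any one trajectory.
import numpy as np

def classify(r, x0, transient=500, sample=64, tol=1e-6):
    x = x0
    for _ in range(transient):               # discard transient behaviour
        x = r * x * (1.0 - x)
    orbit = []
    for _ in range(sample):
        x = r * x * (1.0 - x)
        orbit.append(x)
    distinct = len({round(v / tol) for v in orbit})   # crude count of visited values
    if distinct == 1:
        return "fixed point"
    return f"period-{distinct}" if distinct <= 8 else "chaotic/quasi-periodic"

for r in (2.8, 3.2, 3.5, 3.9):                                        # parameter sweep
    behaviours = {classify(r, x0) for x0 in np.linspace(0.1, 0.9, 9)} # initial-condition sweep
    print(f"r = {r}: {sorted(behaviours)}")
```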
 
  • #90
Hi Ken,
Ken G said:
There is no dispute these models are powerful for what they are powerful at; the overarching question is: can they be used to model free will, or are they in some sense leaving that behind and then reaching conclusions about what it isn't? This is the place where I would say your argument keeps hitting a wall.
In the course of this thread, I am making an effort (apparently successfully) not to provide my opinion on free will. I think there’s a common desire for humans to believe that our feelings and emotions (our phenomenal experiences) actually make a difference. We want to believe we are not automatons, that we have “free will” and our experiences really matter. We intuitively feel that there is something different about our experience of the world and that of an automaton, and therefore, the computational paradigm must somehow be wrong.

The paper by Farkas that I discussed previously is (partly) about how mental causation is epiphenomenal. Frank Jackson wrote a highly cited paper suggesting the same thing called “Epiphenomenal Qualia”. However, epiphenomenal qualia, and the theories behind them, run into a very serious logical problem, one that seems to point to the simple fact that mental causation must be a fact, that phenomenal experiences must account for something, and they must somehow make a difference in the physical world. That logical problem is called the knowledge paradox. In brief, if phenomenal events are epiphenomenal and mental causation is false, then how are we able to say anything about them? That is, how can we say that we are experiencing anything if we aren’t actually reporting what we are experiencing? In other words, if we're saying we are having a phenomenal experience, and that experience is epiphenomenal, meaning that the phenomenal experience CAN NOT INFLUENCE ANYTHING PHYSICAL, then how is it we are acknowledging this experience? How is it we are reporting what we experience if not for the simple fact that the physical words coming out of our mouth are causally influenced by the phenomenal experience? If phenomenal experiences are really not causing anything, then they can't enter the causal chain and they can't cause us to report in any way/shape/form that experience. They are not phenomena that can be reported unless they somehow influence a person to reliably report the phenomena.

The solution to that question - as provided by Farkas or Jackson or Jaegwon Kim or anyone else that’s tried to respond to it - is that there is a 1 to 1 relationship between the physical “supervenience basis” (ie: the neurons) and the phenomenal experience. What they’re saying is that the experience of some event, such as the experience of seeing a red fire truck, hearing the sirens, and smelling the diesel fumes as it passes, is “the same as” the physical interaction of all those neurons on which the experience relies. So yes, that experience relies upon the interaction of neurons, and we might say that we had an experience of the sight, sound and smell of the fire truck as it passed. But if this experience is truly epiphenomenal then we have no ability to report it. We have no ability to say “I experienced the sight of red, the sound of “waaaaaaa” and the smell of diesel exhaust. It wasn’t just a physical thing, I actually experienced it.”

Why don’t we have an ability to report the experience? Because the experience is epiphenomenal, meaning that what these people are really wanting us to believe is that I’m not saying I saw red, and I’m not saying I heard the siren, and I'm not telling you about the smell of diesel fuel. Those expressions came out of my mouth merely because of the physical neuron interactions and because there is a 1 to 1 correlation between the phenomenal experience and the physical interactions. But it wasn’t the phenomenal experience that made me say anything, it was the physical interactions. So in short, there is no room to “reliably report” any phenomenal experience. The fact that I actually was able to behave as if I experienced those things and report my experience, is only due to serendipity. My report that there was an actual correlation between the phenomenal state and the physical state is utterly fortuitous. This line of reasoning was brought out by Shoemaker, “Functionalism and Qualia” and also Rosenberg, “A Place for Consciousness”.

I personally don’t believe that every time I say I’ve experienced something or act as if I did, the actual reason I’m saying and acting in such a manner is that there just happens to be a fortuitous correlation between those phenomenal experiences and the physical supervenience basis. That said, I still accept, in totality, the philosophical basis that our science relies on. FEA is a correct philosophy, but it is not inconsistent with mental causation.
 
  • #91
Q_Goest said:
There are literally tons of papers out there that show how neurons are made to act exactly as local causal physics would have them (i.e., weak emergence). Yes, neurons are highly nonlinear, and yes, to some degree they exhibit stochastic behavior - to the experimentalist; which raises the question of whether they truly are probabilistic or whether there are ‘hidden variables’, so to speak, that we simply haven’t nailed down. Even if we find that neurons exhibit truly probabilistic behaviors such as, for example, radioactive decay exhibits, is that single feature of a neuron truly going to lead us to finding “free will”?

Well, first, I think we all agree the notion of "free will" is already construed, don't we?

If we have any willpower, it's severely limited. Besides being confined by physical laws, as you probably know, there are a number of experiments that can show, at the least, that short-term free will is questionable. We can mark a lot of correlations between education, social class, and crime. We can find genes that link to behavior. If there's any free will in a single individual, it's a very weak force.

I don't see what "downward causation" really means. Physically, it doesn't seem any different from constraints. Constraints can be reduced to particle interactions themselves. And even if those constraints are holonomic, they can still be modeled as a function of more degrees of freedom (though stochastic models are sometimes more successful). At some point though, you have to talk about what the initial conditions are for those degrees of freedom and how they arose. Once you model the whole universe, that becomes paradoxical... do you just weave them back into your system so you have one big closed loop? If matter and energy are to be conserved, it would appear so; and that would relieve the paradox (but I'm obviously speculating here).

To me, "downward causation" seems to be an anthropomorphic desire to inject the subjective human quality of "willpower" into interpretations of global physical events. The only thing, to me, that makes global events significant, is the observer that sees several small events occurring at the same time and makes up a story so that it's all one big picture; that way the observer can have a stable world view. Evolutionary, of course, this makes sense, because it helps us (though bayesian learning) to instigate behavior towards food and shelter and away from danger.

Do I deny that, for instance, language and society influence the personality of an individual? Not at all. But it could simply be the case of the right reduced events happening at the right time that are often correlated together (so we see the global event as significant with our human brains).

That there's a subjective experience arising is another thing that, so far, we can't touch, but through our research we have begun to gain an understanding of what the subjective experience is and is not... hopefully this will lead us to a mechanism for subjectivity (I don't have the slightest inkling how you would even begin to explain subjectivity with anything more than storytelling).
 
  • #92
Q_Goest said:
I think there’s a common desire for humans to believe that our feelings and emotions (our phenomenal experiences) actually make a difference. We want to believe we are not automatons, that we have “free will” and our experiences really matter. We intuitively feel that there is something different about our experience of the world and that of an automaton, and therefore, the computational paradigm must somehow be wrong.

That's not completely true. It goes the other way as well. I posted a Human Behavioral Biology lecture series in the Biology forum (excellent series, you should really watch it if this kind of stuff interests you). The lecturer discusses the history of the debate between the southern US scientists and the European Marxist scientists of the time.

The US scientists were promoting this largely biosocial view in which everything was predetermined by wild nature, and it's largely speculated that they had a political agenda to justify their behavior at the time. There was even an outburst of angry Marxists shouting and screaming "There will be law!"

So there is an allure to the opposite effect, which we have to be equally careful of: taking accountability away from criminals and tyrants, particularly (though I'm sure we've all, at some point, justified our own behavior in some small, trivial way as "it's just who I am").
 
  • #93
Hi aperion, I honestly wish there was more to agree on. Anyway...
apeiron said:
The FEA approach you describe only works because the global constraints are taken as already in existence and so axiomatic. What does not change does not need to be mentioned when modelling.

So take a benard cell. An entropic gradient is presumed. The source and the sink are just there. The model does not seek to explain how this state of affairs developed, just what then happens as a consequence.

Order then arises at a critical temperature - the famous hexagonal cells. Local thermal jostlings magically become entrained in a global scale coherent motion.

Now these global scale cells do in fact exert a downward causal effect. As just said, they entrain the destinies of individual molecules of oil. This is what a dissipative structure is all about: global constraints (the order of the flow) acting to reduce the local degrees of freedom (the random thermal jostle of the molecules becomes suddenly far less random, far more determined).

So benard cells are frequently cited as an example of self-organisation due to the "mysterious" development of global order.

There are other features we could remark on, like the fact that the whorls are hexagonal (roughly) rather than circular. The fact that the activity is confined (globally constrained) reduces even the "local degrees of freedom" of these benard cells. Circular vortexes are the unconstrained variety. Hexagonal ones are ones with extra global constraints enforced by a packing density.

Note too that the macro-order that the benard cell is so often used to illustrate is a highly delicate state. Turn the heat up a little and you soon have the usual transition to chaos proper - whorls of turbulence over all scales, and no more pretty hexagonal cells.
There are no "global constraints" in FEA unless you consider the local, causal influences "global". I honestly don't know what you mean by global constraints unless those are the boundary conditions on the overall physical system. See, we can just as easily extend the boundary in FEA, such as by extending the liquid pool out past the area where it is being heated, to form Benard cells. When we do that, everything stays the same. The boundaries on every element have only the local, causal forces (momentum exchange, conservation of energy, conservation of mass, gravitational field strength, etc...) ascribed to those boundaries, and those boundaries on every volume must be in dynamic equilibrium with every other volume and with the boundary on the system being modeled overall as if the boundary overall was just another layer of finite elements. FEA is truly an example of weak emergence as Bedau describes it in his paper.
 
  • #94
Hi Pythagorean
Pythagorean said:
I don't see what "downward causation" really means. Physically, it doesn't seem any different from constraints.
If you're not familiar with the term "downward causation", please read up on the topic. Here's a couple of papers I can recommend:
Chalmers, "Strong and Weak Emergence"
Emmeche et al, "Levels, Emergence, and Three Versions of Downward Causation"

From Chalmers
Downward causation means that higher-level phenomena are not only irreducible but also exert a causal efficacy of some sort. Such causation requires the formulation of basic principles which state that when certain high-level configurations occur, certain consequences will follow.

From Emmeche, strong downward causation is described as follows:
a given entity or process on a given level may causally inflict changes or effects on entities or processes on a lower level. ... This idea requires that the levels in question be sharply distinguished and autonomous...

Basically, it's saying that local level physical laws are over-ridden by these other physical laws that arise when certain higher level phenomena occur. I'm not sure in what way "constraints" is being used in some of the contexts here, but certainly, "strong downward causation" is something well defined and largely dismissed as being too much like "magic". Strong downward causation is largely refuted by everyone, at least on a 'classical' scale. There are some interesting concepts close to this that might apply at a molecular level, but that's for another day.
 
  • #95
Q_Goest said:
There are no "global constraints" in FEA unless you consider the local, causal influences "global". I honestly don't know what you mean by global constraints unless those are the boundary conditions on the overall physical system. See, we can just as easily extend the boundary in FEA, such as by extending the liquid pool out past the area where it is being heated, to form Benard cells. When we do that, everything stays the same. The boundaries on every element have only the local, causal forces (momentum exchange, conservation of energy, conservation of mass, gravitational field strength, etc...) ascribed to those boundaries, and those boundaries on every volume must be in dynamic equilibrium with every other volume and with the boundary on the system being modeled overall as if the boundary overall was just another layer of finite elements. FEA is truly an example of weak emergence as Bedau describes it in his paper.

Yes, I am sure there is no way to change your mind here. But anyway, boundary conditions would be another name for global constraints of course.

Immediately, when challenged, you think about the way those boundary conditions can be changed without creating a change. Which defeats the whole purpose. The person making the change is not factored into your model as a boundary condition. And you started with a system already at equilibrium with its boundary conditions and found a way to move them so as not to change anything. (Well, expand the boundary too fast and it would cool and the cells would fall apart - but your imagination has already found a way not to have that happen because I am sure your experimenter has skillful control and does the job so smoothly that the cells never get destabilised).

So FEA as a perspective may see no global constraints. Which is no problem for certain classes of modelling, but a big problem if you are using it as the worldview that motivates your philosophical arguments here.

And as I said, a big problem even if you just want to model complex systems such as life and mind.

As asked, I provided examples of how top-down constraints such as selective attention have been shown to alter local neural receptive fields and other aspects of their behaviour. You have yet to explain how this fits with your FEA perspective where this kind of hierarchical causality does not appear to exist.
 
  • #96
nismaratwork said:
Hmmmm... I like it... any good reading you could recommend?

Nonlinear optics: past, present, and future
Bloembergen, N.

Is what I found to answer your question (looking mostly at the history), which might be a good background from which to go and find your particular interests. I think it really depends on your specific interest, but I've had very little exposure to nonlinear optics.
 
  • #97
Q_Goest said:
Hi Pythagorean

If you're not familiar with the term "downward causation", please read up on the topic. Here's a couple of papers I can recommend:
Chalmers, "Strong and Weak Emergence"
Emmeche et al, "Levels, Emergence, and Three Versions of Downward Causation"

From Chalmers [...] From Emmeche, strong downward causation is described as follows: [...] Basically, it's saying that local level physical laws are over-ridden by these other physical laws that arise when certain higher level phenomena occur. I'm not sure in what way "constraints" is being used in some of the contexts here, but certainly, "strong downward causation" is something well defined and largely dismissed as being too much like "magic". Strong downward causation is largely refuted by everyone, at least on a 'classical' scale. There are some interesting concepts close to this that might apply at a molecular level, but that's for another day.

Yes, I've seen the definitions, but my point was, I guess, that I stand alongside the people that think it's "magic". It seems rather mystical to me, which means either I don't understand it or it's bs. I chose to say I didn't understand it; I didn't mean that I didn't know the definition.

I can definitely accept that there's global behavior that doesn't occur at smaller scales (a water molecule does not manifest a wave). I work in systems that can be considered weakly emergent. It seems to me that it would take omniscience to judge strong emergence. Or a really simple and perfectly closed system (but then your chance of even weak emergence dwindles). Otherwise you're ignoring the rather high probability (as dictated by history) that there's another aspect ("hidden variable"). It will take a lot of evidence to rule out the higher probability.
 
  • #98
Q_Goest said:
Basically, it's saying that local level physical laws are over-ridden by these other physical laws that arise when certain higher level phenomena occur. I'm not sure in what way "constraints" is being used in some of the contexts here, but certainly, "strong downward causation" is something well defined and largely dismissed as being too much like "magic". Strong downward causation is largely refuted by everyone, at least on a 'classical' scale. There are some interesting concepts close to this that might apply at a molecular level, but that's for another day.

Chalmers and others might like to stress irreducibility, but that is not actually what I've been saying at all.

The argument is instead that both local and global causes are reducible to "something else". Which is where Peirce's logic of vagueness, etc, comes in.

So Q Goest is presenting sources and ideas he is familiar with, not the ones I am employing.
 
  • #99
apeiron said:
Chalmers and others might like to stress irreducibility, but that is not actually what I've been saying at all.

The argument is instead that both local and global causes are reducible to "something else". Which is where Peirce's logic of vagueness, etc, comes in.

So Q Goest is presenting sources and ideas he is familiar with, not the ones I am employing.

Good to know; this is what I mean by "hidden variable" but "variable" is too specific of a word and is attached to an irrelevant history. But this appears like weak emergence to me; I had the impression you were a proponent of strong emergence.
 
  • #100
Pythagorean said:
But this appears like weak emergence to me; I had the impression you were a proponent of strong emergence.

How much stronger can you get in saying everything emerges?

So mine is super-emergence. The uber, premium brand stuff! None of this namby pamby so-called strong stuff, let alone the wilting weak, that others want to palm off on you.
 
  • #101
Ok, but what does that mean functionally? I don't deny that, for instance, the star dust that makes us up was generated by gigantic thermodynamic processes.

But all I see is a bigger, possibly recursive, chain of weak emergent events.
 
  • #102
Pythagorean said:
Ok, but what does that mean functionally? I don't deny that, for instance, the star dust that makes us up was generated by gigantic thermodynamic processes.

But all I see is a bigger, possibly recursive, chain of weak emergent events.

Well, this is a whole other thread if you want to start it. And I've given about 10k references already in many previous threads you have been involved in.

But you can check this thread I created on the notion of vague beginnings.

https://www.physicsforums.com/showthread.php?t=301514&highlight=vagueness
 
  • #103
Pythagorean said:
Nonlinear optics: past, present, and future
Bloembergen, N.

Is what I found to answer your question (looking mostly at the history), which might be a good background from which to go and find your particular interests. I think it really depends on your specific interest, but I've had very little exposure to nonlinear optics.

Exposure... heh... I'll get on that, thanks Pythagorean!
 
  • #104
apeiron said:
How much stronger can you get in saying everything emerges?

So mine is super-emergence. The uber, premium brand stuff! None of this namby pamby so-called strong stuff, let alone the wilting weak, that others want to palm off on you.



The fundamental 'stuff' that everything is supposed to emerge from is still missing. For more than 100 years scientists have been failing to identify anything that resembles fundamental building blocks from which matter, time and space emerge (I reject strings and loops as idle speculation at this time, and wavefunctions and Hilbert spaces as too ill-defined and ambiguous mathematical tricks). One begins to wonder if the hidden variables lie within reality at all. If causality (this is crucial for the "free-will vs pre-determination" debate) is proven not to be fundamental (there are good hints that it is not), science as a tool for discovering truths goes out the window completely. A whole plethora of top physicists indulged in mysticism because of this in the 1950s and '60s, and the sad thing is progress on this issue has stalled.
 
  • #105
Pythagorean said:
I don't see what "downward causation" really means. Physically, it doesn't seem any different from constraints.

Here is a good primer on downward causation (the whole site is a good one)...
http://pespmc1.vub.ac.be/DOWNCAUS.HTML

Downward causation can be defined as a converse of the reductionist principle above: the behavior of the parts (down) is determined by the behavior of the whole (up), so determination moves downward instead of upward. The difference is that determination is not complete. This makes it possible to formulate a clear systemic stance, without lapsing into either the extremes of reductionism or of holism

Heylighen also wrote a good review paper on complexity and philosophy...
http://cogprints.org/4847/1/ComplexityPhilosophy.doc.pdf

Also the concept of emergent property receives a more solid definition via the ideas of constraint and downward causation. Systems that through their coupling form a supersystem are constrained: they can no longer act as if they are independent from the others; the supersystem imposes a certain coherence or coordination on its components. This means that not only is the behavior of the whole determined by the properties of its parts (“upwards causation”), but the behavior of the parts is to some degree constrained by the properties of the whole (“downward causation” (Campbell, 1974)).

John Collier wrote about downward causation in Benard cells...
http://www.kli.ac.at/theorylab/jdc/papers/BC-ECHOIV.pdf

The understanding we have of Bénard cells, including the careful analysis by Chandresekhar (1961) assumes the convecting state, and compares that with the conducting state to derive the critical Rayleigh number. There is no known way to derive the convecting state from the microscopic movements of the molecules or local fluid dynamics. It is very likely impossible, a) due to the impossibility of solving the equations of motion, and b) likely chaotic at the molecular level. The Bénard cells are genuinely emergent in the sense of Collier and Muller (1998).
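To make the quoted criterion concrete, here is a back-of-envelope Rayleigh number check; every fluid property below is an assumed round number for a thin oil layer, not a value taken from Chandrasekhar or Collier.

```python
# Rough Rayleigh-number check for the onset of Benard convection,
# Ra = g * beta * dT * d^3 / (nu * kappa), compared with the critical value.
g     = 9.81        # gravity, m/s^2
beta  = 1.0e-3      # thermal expansion coefficient, 1/K   (assumed)
nu    = 1.0e-5      # kinematic viscosity, m^2/s           (assumed)
kappa = 1.0e-7      # thermal diffusivity, m^2/s           (assumed)
d     = 5.0e-3      # fluid layer depth, m                 (assumed)
Ra_c  = 1708.0      # critical Rayleigh number for rigid-rigid boundaries

def rayleigh(delta_T):
    return g * beta * delta_T * d**3 / (nu * kappa)

for dT in (0.5, 2.0, 10.0):     # imposed temperature differences across the layer, K
    Ra = rayleigh(dT)
    state = "convection (cells possible)" if Ra > Ra_c else "pure conduction"
    print(f"dT = {dT:4.1f} K  ->  Ra = {Ra:9.1f}  ->  {state}")
```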
 
