Neural correlates of free will

In summary, Benjamin Libet's work suggests that our decisions to act occur before our conscious awareness of them. The problem this poses for the idea of free will is that it seems to set up an either/or battle between determinism and free will. Some might respond by accepting that the neurological correlates of free will are deterministic (if one wishes to adopt a kind of dualistic picture in which everything physical is deterministic and free will is housed in some extra-physical seat of conscious choice). Others might look critically at the very assumption that physically identifiable processes are deterministic in some "absolutely true" way, such that they could preclude a concept of free will.
  • #36
Q_Goest said:
I’m not familiar with “holonomic” so I did a search:

It sounds like Pattee simply wants these macromolecules and genetics to have a stronger causal role in evolution, but I'm not sure exactly what he's getting at. Perhaps you could start a new thread regarding Pattee and his contributions to philosophy and science.

Given that Pattee is an excellent cite for the systems view, this is certainly the right place to mention him :rolleyes:.

What he is talking about here is the symbol grounding issue - how non-holonomic constraints can actually arise in the natural world. Genetic information has to make itself separate from what it controls to be able to stand as a level of top-down control.

Sure, Baranger's paper is pretty basic, but it clearly makes the point that chaotic systems are deterministic given precise initial conditions, which is relevant to the OP.

That was hardly the thrust of the paper. And the more correct statement is that chaotic systems (such as the weather) can be modelled using the mathematical tools of deterministic chaos. This is different from the claim that the weather, or any other system, is deterministically chaotic in the ontic sense.

So sure, the models behave a certain way - unfold mechanistically from their initial conditions. And it certainly resembles the observables of real world systems like the weather. But we also know that the models depend on unrealistic assumptions (such as a real world ability to measure initial conditions with complete accuracy).

From a philosophical view, you just can't jump from "looks like" to "is". Especially when you know there are ways that "it isn't".

I think it’s important also to separate out chaotic systems that are classical (and separable) in a functional sense, such as Benard cells, from systems that are functionally dependent on quantum scale interactions. Our present day paradigm for neuron interactions is that they are dependent on quantum scale interactions, so it seems to me one needs to address the issue of how one is to model these “non-holonomic” properties (classical or quantum mechanical influences) and whether or not such a separation should make any difference.

Pardon me? Did you just suggest that a QM basis to neural function was mainstream?

This is a good example of what confuses me about everything you say about this "systems approach". Are you suggesting these "top-down constraints" are somehow influencing and subordinating local causation? That is, are you suggesting that causes found on the local level (such as individual neuron interactions) are somehow being influenced by the top down constraints such that the neurons are influenced not only by local interactions, but also by some kind of overall, global configuration?

What I've said is that global constraints act top-down to restrict local degrees of freedom. So in a strong sense that does create what is there at the local scale. Of course the logic is interactive. It is a systems approach. So the now-focused degrees of freedom that remain must in turn construct the global scale (that is making them).

This is how brains work. A neuron has many degrees of freedom. A particular neuron (in a baby's brain, or other unconstrained state) will fire to just about anything. But when a global state of attention prevails, the firing of that neuron becomes highly constrained. It becomes vigorous only in response to much more specific inputs. This is a very basic fact of electrophysiology studies.

So it is not just a theory, it is an observed fact. And yes, this is not the way machines work in general.
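The logic can be caricatured in a few lines of Python. This is a toy sketch only: the Gaussian tuning curve and all the numbers are invented for illustration, not taken from any electrophysiology study.

```python
import math

def response(stimulus, preferred, tuning_width):
    # Gaussian tuning curve: firing strength falls off with the
    # mismatch between the stimulus and the neuron's preference.
    return math.exp(-((stimulus - preferred) ** 2) / (2 * tuning_width ** 2))

def firing(stimulus, preferred, attention_on):
    # The "global" attentional state enters only as a parameter that
    # narrows the tuning width, i.e. restricts the neuron's local
    # degrees of freedom rather than driving it directly.
    width = 0.5 if attention_on else 3.0
    return response(stimulus, preferred, width)

# An off-preference stimulus drives the unconstrained neuron strongly,
# but barely drives it once a global attentive state prevails.
print(firing(stimulus=2.0, preferred=0.0, attention_on=False))  # broad tuning: fires anyway
print(firing(stimulus=2.0, preferred=0.0, attention_on=True))   # narrow tuning: nearly silent
```

The point of the sketch is just that the global state never fires the neuron itself; it only constrains which inputs the local response rule will answer to.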

After rereading his paper, I’d say that he does in fact try to separate mental states (phenomenal states) from the underlying physical states as you say, but that mental states are epiphenomenal isn’t an unusual position for computationalists. Frank Jackson’s “Epiphenomenal Qualia”, for example, is a much-cited paper that contends exactly that. So I’d say Farkus is in line with many philosophers on this account. He's suggesting mental states ARE physical states, and it is the mental properties that are "causally irrelevant" and an epiphenomenon (using his words), which I’d say is not unusual in the philosophical community.

I'm not holding up the Farkus paper as a shining example of the systems view. As I made plain, it was just what I happened to be reading that day, and my remark was that here was another reinventing the wheel.

But I think you are also reading your own beliefs into the words here.

Not that there aren’t logical problems with that approach. He states for example:

That says to me, he accepts that neurons only interact locally with others but we can also examine interactions at higher levels, those that are defined by large groups of neurons.

I don't see the issue. This is the standard view of hierarchy theory. Except you introduced the word "only" here to suggest Farkus meant that there are not also the local~global interactions that make the brain a system.

There are some areas in his paper I’m not too sure about. Take for example:

If he’s suggesting that this “higher level” is not determined, in a deterministic way, by the local interactions of the lower level alone, then that sounds like strong downward causation, which is clearly false. Certainly, there are people who would contend that something like that would be required for “free will” or any theory of mental causation. But I’m not sure that’s really what he wants.

What he says is that you have two things going on. The higher level has a long-run memory which causes what we might call its persistent state. Then it is also responding to the input coming from below, so its state is also "caused" by that.

If you dig out Stephen Grossberg's neural net papers, or Friston's more recent Bayesian brain papers, you will get a much more elegant view. Yet one with the same essential logic.

In another questionable section he states:

In the part emphasized, I’d say he’s trying to suggest that a person is somehow “immediately” and “simultaneously” affected by a “global state” on entering this classroom, which I picture as being a zone of influence of some sort per Farkus. Were the same person to enter the same room but blind and deaf, would these same “global states” immediately and simultaneously affect that person too? It sounds like Farkus wants his readers to believe that as well, but that sounds too much like magic to me.

Surely he is just using an analogy and not suggesting that psi is involved :rofl:. Why would his explicit claim that a person "senses" the atmosphere be read instead as a claim that a person who could not sense (being blind and deaf) would still sense?

All he is saying is that there is an ambient emotional state in the classroom - a generally shared state averaged across a connected set of people. Any newcomer then will respond to this globally constraining atmosphere.

I think this is a good lead into strong emergence and strong downward causation which, in one way or another, is necessary for mental causation and free will. The question really is, can the higher physical levels somehow subordinate the local interactions of neurons? And if so, how?

Excellent. But there are so many thousands of papers on the neuroscience of top-down attentional effects on neural receptive fields that it is hard to know where to start.

Here is a pop account with some useful illustrations.
http://www.sciencedaily.com/releases/2009/03/090325132326.htm

Here is a rather general review.
http://pbs.jhu.edu/bin/q/f/Yantis-CDPS-2008.pdf
 
  • #37
nismaratwork said:
Nonlinear dynamics includes nonlinear optics, right? (just clarifying for me here, not a leading question.)

It appears so. I'm not sure, though, if it's always dynamical just because it's nonlinear optics, but there are plenty of dynamical examples of nonlinear optics; what I've seen are "simple" systems, not "complex" systems. But the Kerr effect (part of the origin of nonlinear dynamics) certainly appears like a nonlinear dynamical event to my mind.

As for complexity, we can model a complex network of "cells" (functional partitions in a material) as a system responding to the injected energy (the electromagnetic source) and talk about how the information propagates through the system and that's a complex dynamical system.
 
  • #38
Ken G said:
My point about nonlinear dynamics in general is that it starts with a kind of fiction, which is that the system has "a state." Mathematically, if we have nonlinear dynamics, and start at a state, we have deterministic evolution that exhibits sensitivity to initial conditions. However, if we don't actually have a state, but instead a collection of states, involving some uncertainty, then our initial uncertainty grows with time. Mathematically, we would still call that deterministic, because we have a bundle of deterministic trajectories that fan out and cover most or all of the accessible phase space. But physically, if we have an initial uncertainty that grows, we cannot call that deterministic evolution, because we cannot determine the outcome. Hence, if we cannot assert that the reality begins in "a state", we cannot say that its future is determined either. Rather, we see determinism for what it is-- a gray scale of varying degree of predictability, not an absolute state of how things evolve.

The Catch-22 of chaotic systems is we cannot demonstrate that the system does begin in a state other than a state of uncertainty; nothing else is actually demonstrable. It is purely a kind of misplaced faith in a mathematical model that tells us a macroscopic system actually has a state. Even quantum mechanically, a macro system is treated as a mixed state, which is of course not distinguishable from an uncertain state (and here I do not refer to the Heisenberg uncertainty of pure states, but the garden variety uncertainty of mixed states).

Well, sure, don't confuse the model with reality; that's always a good thing to remember.
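Ken G's point about growing uncertainty can be sketched in a few lines of Python. The logistic map here just stands in for any chaotic system, and the starting values are arbitrary:

```python
# Two runs of the chaotic logistic map x -> r*x*(1-x) at r = 4, starting
# from states that differ by only 1e-10 (a tiny "initial uncertainty").
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.3, 0.3 + 1e-10
max_gap = 0.0
for step in range(60):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))

# The 1e-10 uncertainty grows until it is the size of the whole interval
# [0, 1]: the "bundle of trajectories" has fanned out, and the outcome is
# no longer determined in any practical sense.
print(max_gap)
```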
 
  • #39
Pythagorean said:
It appears so. I'm not sure, though, if it's always dynamical just because it's nonlinear optics, but there are plenty of dynamical examples of nonlinear optics; what I've seen are "simple" systems, not "complex" systems. But the Kerr effect (part of the origin of nonlinear dynamics) certainly appears like a nonlinear dynamical event to my mind.

As for complexity, we can model a complex network of "cells" (functional partitions in a material) as a system responding to the injected energy (the electromagnetic source) and talk about how the information propagates through the system and that's a complex dynamical system.

Hmmmm... I like it... any good reading you could recommend?
 
  • #40
Ken G said:
I think we are largely in agreement.
I think so, but for one subtle choice of wording, and that's maybe not so trivial.

Ken G said:
no mathematical definition can tell you something about free will other than whether or not the mathematical definition is a useful replacement for free will (...) unless one adopts the weak meaning that anything that is usefully replaced by a determinist model is what we mean by "deterministic" when applied to a real thing.
This is well said and I largely agree, but for one word: to me, what you call weak meaning is what I'd call physical or scientific meaning, and any other kind of meaning I'd call either metaphysical, weak, or boring - depends on the mood ;-)

Ken G said:
all too easy to say, "according to the mathematician."
What is easy is not necessarily wrong :biggrin:
 
  • #41
Ken G said:
The Catch-22 of chaotic systems is we cannot demonstrate that the system does begin in a state other than a state of uncertainty
Do you see a deep difference with non-chaotic systems?
 
  • #42
Lievo said:
This is well said and I largely agree, but for one word: to me, what you call weak meaning is what I'd call physical or scientific meaning, and any other kind of meaning I'd call either metaphysical, weak, or boring - depends on the mood ;-)
I can accept that, it's just that in my experience, that scientific meaning often leaks over into a kind of metaphysical meaning without even realizing it. It's like the way many people claim to take a "shut up and calculate" approach to physics-- until they don't.
 
  • #43
Lievo said:
Do you see a deep difference with non-chaotic systems?
The difference is in how much it matters. For convergent trajectories, whether in reality there exists an exact initial state, or just a reasonably precise one, makes little difference when held to the lens of determinism. In chaotic systems, there's a huge difference in the practice of prediction, which is the operational meaning of determinism. So chaotic systems teach us, not that determinism leads to the butterfly effect, but that the butterfly effect challenges the very meaning of determinism.
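The contrast Ken G draws can be made concrete with a toy comparison. Both maps and all constants here are invented for illustration: for a contracting map the initial uncertainty never matters, while for a chaotic one it swamps the prediction.

```python
def convergent(x):
    # contracting map: nearby trajectories get closer at each step
    return 0.5 * x

def chaotic(x):
    # logistic map at r = 4: nearby trajectories separate exponentially
    return 4.0 * x * (1.0 - x)

def max_gap(f, x0, eps, steps):
    # largest separation ever reached by two runs started eps apart
    a, b, m = x0, x0 + eps, 0.0
    for _ in range(steps):
        a, b = f(a), f(b)
        m = max(m, abs(a - b))
    return m

eps = 1e-6
print(max_gap(convergent, 0.3, eps, 30))  # stays below eps: prediction is easy
print(max_gap(chaotic, 0.3, eps, 30))     # grows by many orders of magnitude
```

In the convergent case it makes little operational difference whether the initial state was exact or merely precise; in the chaotic case it makes all the difference, which is the sense in which the butterfly effect bears on the meaning of determinism.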
 
  • #44
Ken G said:
My point about nonlinear dynamics in general is that it starts with a kind of fiction, which is that the system has "a state." Mathematically, if we have nonlinear dynamics, and start at a state, we have deterministic evolution that exhibits sensitivity to initial conditions. However, if we don't actually have a state, but instead a collection of states, involving some uncertainty, then our initial uncertainty grows with time. Mathematically, we would still call that deterministic, because we have a bundle of deterministic trajectories that fan out and cover most or all of the accessible phase space. But physically, if we have an initial uncertainty that grows, we cannot call that deterministic evolution, because we cannot determine the outcome. Hence, if we cannot assert that the reality begins in "a state", we cannot say that its future is determined either. Rather, we see determinism for what it is-- a gray scale of varying degree of predictability, not an absolute state of how things evolve.

The Catch-22 of chaotic systems is we cannot demonstrate that the system does begin in a state other than a state of uncertainty; nothing else is actually demonstrable. It is purely a kind of misplaced faith in a mathematical model that tells us a macroscopic system actually has a state. Even quantum mechanically, a macro system is treated as a mixed state, which is of course not distinguishable from an uncertain state (and here I do not refer to the Heisenberg uncertainty of pure states, but the garden variety uncertainty of mixed states).

But isn't the collective whole of all the macroscopic systems actually one big deterministic system at the lowest level? It seems to me it becomes a matter of scope and not an inherent non-determinism in the system as a whole. Also, just hypothesizing: wouldn't the universe during the big bang start out with almost no separated states and then evolve into many different states, thus creating increased separation of systems, while ultimately remaining one big system?
 
  • #45
octelcogopod said:
But isn't the collective whole of all the macroscopic systems actually one big deterministic system at the lowest level? It seems to me it becomes a matter of scope and not an inherent non-determinism in the system as a whole. Also, just hypothesizing: wouldn't the universe during the big bang start out with almost no separated states and then evolve into many different states, thus creating increased separation of systems, while ultimately remaining one big system?

That would seem to depend on which interpretation of QM one subscribes to.
 
  • #46
Ken G said:
scientific meaning often leaks over into a kind of metaphysical meaning without even realizing it
I couldn't agree more. I'm not sure it really makes a difference for physicists, but in "softer" sciences this has an impact. One example is incautious naming. The ABO gene is of course the gene for the ABO phenotype, right? But once we name it this way, it becomes hard to remember that it may or may not be implicated in functions that have nothing to do with the ABO phenotype. Not to mention that I've just said "gene" as if it were something tangible rather than something we define after we find it's associated with some function.

Ken G said:
In chaotic systems, there's a huge difference in the practice of prediction, which is the operational meaning of determinism. So chaotic systems teach us, not that determinism leads to the butterfly effect, but that the butterfly effect challenges the very meaning of determinism.
I understand your point, but you're going too far in thinking the butterfly effect really introduces something new. As you know, the butterfly effect means that small perturbations can quickly become large perturbations. But this is an experimental prediction in itself, and in itself it is testable!

Of course what you actually underline is that there exist "bad questions" that can't be answered, such as what will be the future state of a chaotic system given a known present state? Theory says this is a bad question and all you can do is assign probabilities to different possible outcomes. Maybe you can interpret that as a challenge to determinism, but the fact is that the existence of "bad questions" is not new. For example, if you want to know both the position and speed of a particle with better precision than Heisenberg allows, this is easy to ask and impossible to answer. The theory says you can't know that, and this prohibition is an experimental prediction in itself.

So, sometimes the prediction is that the prediction you want is not something reality allows you to know. Is that a challenge to determinism? Maybe, but it is certainly not specific to chaos. To me this just shows that language allows us to ask a larger set of questions than the set of all meaningful questions.
 
  • #47
The claim that there might be some kind of randomness to nature, at any level, doesn't provide for free will. There has to be some form of strong downward causation which first requires the system to be nonseparable, and then it requires the whole of the system to intervene or subordinate the physical state of some part of that system. That might describe molecular systems but it doesn't describe the conventional view of neuron interactions.
 
  • #48
Q_Goest said:
There has to be some form of strong downward causation which first requires the system to be nonseparable, and then it requires the whole of the system to intervene or subordinate the physical state of some part of that system.
What is the evidence for thinking that this is a conventional view?
 
  • #49
Ken G said:
I can accept that, it's just that in my experience, that scientific meaning often leaks over into a kind of metaphysical meaning without even realizing it. It's like the way many people claim to take a "shut up and calculate" approach to physics-- until they don't.
While thinking at it, there is a very similar question in mathematics (Platonicist vs Formalist interpretation). I'd be interested in your position on this.
 
  • #50
Lievo said:
What is the evidence for thinking that this is a conventional view?
Good question. The computationalist crowd including Chalmers, Chrisley, Copeland, Endicott, and many others, and those not in the computationalist crowd such as Maudlin, Bishop, Putnam, etc., all argue over the issue of separability, as discussed to some degree in my other thread regarding Zuboff's paper. If a system is separable, it's not holistic in any sense that allows for the emergence of a phenomenon like consciousness.

But a nonseparable system isn't enough to allow for that system to have a "choice" in the sense that the strongly emergent phenomena can have control over the constituents in any meaningful way. There's a lot of discussion in the past decade or two about emergence and how it applies to mental causation. If there is only local causation, then even if you have some kind of holistic, emergent phenomenon, it can have no influence over the constituent parts.
 
  • #51
Q_Goest said:
all argue over the issue of separability as discussed to some degree in my other thread regarding Zuboff's paper.
This may be evidence that the question was considered important, but of course it is not evidence that the tentative solution you pointed to is the conventional view :wink:

Q_Goest said:
If there is only local causation, then even if you have some kind of holistic, emergent phenomenon, it can have no influence over the constituent parts.
Don't you think these guns (http://en.wikipedia.org/wiki/Gun_(cellular_automaton)) challenge this view?
 
  • #52
Lievo said:
While thinking at it, there is a very similar question in mathematics (Platonicist vs Formalist interpretation). I'd be interested in your position on this.
Yes, this would seem to be the key issue indeed. Personally, I am heavily swayed by the formalist approach. I feel there is an important difference between logical truth, which is rigorous and syntactic, and semantic meaning, which is experiential and nonrigorous. Mathematics is primarily about the former, because of its reliance on proof, and physics is primarily about the latter, because of its reliance on experiment. Why the two find so much common ground must be viewed as the deepest mystery in the philosophy of both, and I have no answer for it, other than that there seems to be some rule that says what happens will be accessible to analysis, but the analysis will not be what happens.

What's more, I think that Godel's proof drove a permanent wedge between certainty and meaning, such that the two must forever be regarded as something different.

In the issue of free will, one must then ask if free will is housed primarily in the abstract realm of syntactic relationships, where lives concepts like determinism, or primarily in the experiential realm of what it feels like to be conscious, where lives human perception and experience. To me, it must be placed in the latter arena, which is why I think the whole issue of determinism vs. free will is a category error.
 
  • #53
Q_Goest said:
If there is only local causation, then even if you have some kind of holistic, emergent phenomenon, it can have no influence over the constituent parts.
But again one must ask, what is it that has only local causation, the brain, or the model of the brain that you are using for some specific purpose? It is important not to confuse the two, or you run into that Catch-22: you can never argue that the brain does not give rise to holistic emergent phenomena on the basis that you can successfully model the brain using local causation, because if local causation cannot lead to holistic emergent phenomena that influence the parts, then you may simply be building a model that cannot do what you are then using the model to try and argue the brain cannot do. In other words, you are telling us about the capabilities of models, not the capabilities of brains.
 
  • #54
Lievo said:
Don't you think these guns (http://en.wikipedia.org/wiki/Gun_(cellular_automaton)) challenge this view?
Cellular automata were given as a prime example of "weak emergence" by http://www.google.com/search?hl=en&...k+emergence&aq=f&aqi=g1&aql=&oq=&safe=active. His paper is fairly popular, being cited over 200 times. Bottom line, cellular automata are weakly emergent. They are separable and have no "downward" causal effects on local elements.
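For what it's worth, the locality of cellular automata is easy to exhibit. Here is a minimal Game of Life stepper, sketched from scratch (the standard rule, not code from any cited paper): the update contains no term that refers to any global state.

```python
def life_step(alive):
    # One Game-of-Life update on a set of live cell coordinates. Each
    # cell's fate depends only on its eight neighbours: purely local
    # causation, with no global or "downward" term anywhere in the rule.
    counts = {}
    for (x, y) in alive:
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if (dx, dy) != (0, 0):
                    cell = (x + dx, y + dy)
                    counts[cell] = counts.get(cell, 0) + 1
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in alive)}

# A horizontal "blinker" becomes a vertical one and back; glider guns and
# all other Life patterns emerge from exactly this same local rule.
blinker = {(0, 0), (1, 0), (2, 0)}
print(life_step(blinker) == {(1, -1), (1, 0), (1, 1)})  # True
```

Whatever patterns arise, every causal arrow in the program points from neighbourhood to cell, which is why such systems are called separable and only weakly emergent.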
 
  • #55
Q_Goest said:
That might describe molecular systems but it doesn't describe the conventional view of neuron interactions.

But I just gave you references that show it IS the conventional view of neuron interactions.

The brain operates as an anticipatory machine. It predicts its next state globally and then reacts to the localised exceptions that turn up. Like Farkus's analogy of a classroom with an atmosphere, there is a running state that is the active context for what happens next.

This is completely explicit in neural network models such as Grossberg and Friston's.

So you are just wrong about the conventional view in neuroscience.

New Scientist did a good piece on Friston...
http://reverendbayes.wordpress.com/2008/05/29/bayesian-theory-in-new-scientist/
 
  • #56
Ken G said:
But again one must ask, what is it that has only local causation, the brain, or the model of the brain that you are using for some specific purpose? It is important not to confuse the two, or you run into that Catch-22: you can never argue that the brain does not give rise to holistic emergent phenomena on the basis that you can successfully model the brain using local causation, because if local causation cannot lead to holistic emergent phenomena that influence the parts, then you may simply be building a model that cannot do what you are then using the model to try and argue the brain cannot do. In other words, you are telling us about the capabilities of models, not the capabilities of brains.
But the brain IS modeled using typical FEA-type computational programs. They can use the Hodgkin-Huxley model or any other compartment method, of which there are a handful. FEA is an example of the philosophy of separable systems reduced to finite (linear) elements. Nevertheless, as I'd mentioned to apeiron, FEA is just a simplification of a full differential formulation, as I've described in the library: https://www.physicsforums.com/library.php?do=view_item&itemid=365. FEA and multiphysics software is a widespread example of the use of computations that functionally duplicate (classical) physical systems. Even highly dynamic ones such as Benard cells, wing flutter, aircraft crashing into buildings and even the brain are all modeled successfully using this approach. That isn't to say that FEA is a perfect duplication of a full differential formulation of every point in space. It's obviously not. However, the basic philosophical concept that leads us to FEA (ie: that all elements are in dynamic equilibrium at their boundaries) is the same basic philosophy that science and engineering use for brains and other classical physical systems.
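To illustrate the compartmental philosophy being described, here is a minimal passive-cable sketch in Python. It is not a Hodgkin-Huxley model and every constant is invented; it just shows the bookkeeping: each compartment is updated only from its own state and the flux exchanged across its boundaries with immediate neighbours.

```python
# A toy passive compartmental model in the FEA spirit: the cell is split
# into N compartments, each in dynamic equilibrium with its neighbours.
# All constants are illustrative, not physiological.
N, dt, steps = 20, 0.01, 2000
g_leak, g_axial = 1.0, 5.0   # leak and inter-compartment conductances
v = [0.0] * N                # membrane potential per compartment
i_inject = 1.0               # steady current injected into compartment 0

for _ in range(steps):
    v_new = v[:]
    for i in range(N):
        # Axial flux: depends only on the two neighbouring compartments.
        axial = 0.0
        if i > 0:
            axial += g_axial * (v[i - 1] - v[i])
        if i < N - 1:
            axial += g_axial * (v[i + 1] - v[i])
        inj = i_inject if i == 0 else 0.0
        v_new[i] = v[i] + dt * (inj - g_leak * v[i] + axial)
    v = v_new

# The voltage decays with distance from the injection site, as expected
# for a passive cable assembled from purely local couplings.
print(v[0], v[N - 1])
```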
 
  • #57
Q_Goest said:
Cellular automata were given as a prime example of "weak emergence" by http://www.google.com/search?hl=en&...k+emergence&aq=f&aqi=g1&aql=&oq=&safe=active. His paper is fairly popular, being cited over 200 times. Bottom line, cellular automata are weakly emergent. They are separable and have no "downward" causal effects on local elements.

Agreed, the very definition of Turing Computation is that there is no top-down causation involved, only local or efficient cause.

The programmable computer is the perfect "machine". Its operations are designed to be deterministic.

And there are now so many computers and computer scientists in society that people are coming to believe reality is also "computational". They can no longer imagine other forms of more complex causality it seems.
 
  • #58
Hi apeiron,
apeiron said:
But I just gave you references that show it IS the conventional view of neuron interactions.
Hopefully my last post addresses this. Neuron interactions are philosophically treated using compartmental methods as described in my last post. If that's not what you feel is pertinent to the issue of separability, please point out specifically what you wish to address.
 
  • #59
apeiron said:
And there are now so many computers and computer scientists in society that people are coming to believe reality is also "computational". They can no longer imagine other forms of more complex causality it seems.
But it's not just "computer scientists". People are using that basic philosophy (computational/FEA) for everything now (at a classical scale). We can't live without it because it's so powerfully predictive.
 
  • #60
apeiron said:
Pardon me? Did you just suggest that a QM basis to neural function was mainstream?
Sorry! Thanks for pointing that out. I fixed it (edited it). What I meant, of course, is that QM is NOT the basis for neural function.
 
  • #61
Q_Goest said:
But the brain IS modeled using typical FEA type computational programs.
And that is my point. You are saying "the brain is modeled as X." Then you say "X cannot do Y." Then you say "thus Y cannot be important in understanding the brain." That is the Catch-22: if you build a model that cannot do something, you can't then pin that inability on the brain. The model may do many things the brain does, so it may be a good model of a brain, but one cannot reason from the model to the thing; that's essentially the fallacy of reasoning by analogy.

FEA and multiphysics software is a widespread example of the use of computations that functionally duplicate (classical) physical systems.
Ah, now there's an interesting turn of phrase, "functionally duplicate." What does that mean? It sounds like it should mean "acts in the same way that I intended the model to succeed at acting", but you sound like you are using it to mean "does everything the system does." That is what you cannot show, you cannot make a model for a specific purpose, demonstrate the model succeeds at your purpose, and use that success to rule out every other purpose a different model might be intended to address-- which is pretty much just what you seem to be trying to do, if I understand you correctly.

Even highly dynamic ones such as Benard cells, wing flutter, aircraft crashing into buildings and even the brain are all modeled successfully using this approach.
Certainly, "modeled successfully." Now what does that mean? It means you accomplished your goals by the model, which is all very good, but it does not mean you can then turn around and use the model to obtain orthogonal information about what you are modeling. Just what constitutes "orthogonal information" is a very difficult issue, and I don't even know of a formal means to analyze how we could tell that, other than trial and error.
However, the basic philosophical concept that leads us to FEA (ie: that all elements are in dynamic equilibrium at their boundaries) is the same basic philosophy that science and engineering use for brains and other classical physical systems.
But so what?
 
  • #62
apeiron said:
The programmable computer is the perfect "machine". Its operations are designed to be deterministic.

And there are now so many computers and computer scientists in society that people are coming to believe reality is also "computational".
This is the perspective I am also in agreement with. We are seeing a failure to distinguish the goals of a model from the thing that is being modeled. I see this error in lots of places, it was made many times in the history of science. When Newton came up with a sensational model of motion, very unified and highly predictive, people said "so that's how reality works." They then reached all kinds of conclusions about what reality could and could not do, none of which were worth the paper they were written on when other models deposed Newton's. The point is, that is just reversed logic-- we don't use models to tell us what systems can do, we use systems to tell us what we are trying to get models to do, and we choose the latter. The choice of what we want to answer determines the model, the models shouldn't tell us what we should want to answer.
 
  • #63
Q_Goest said:
But it's not just "computer scientists". People are using that basic philosophy (computational/FEA) for everything now (at a classical scale). We can't live without it because it's so powerfully predictive.

Or powerfully seductive :rolleyes:.

I am not denying that the computational view is powerful. It is great for technology - for achieving control over the world. But it makes a myopic basis for philosophy as it leaves out the other sources of causality. And you won't find many biologists or neurologists who believe that it is the truth of their systems.

In your sphere of work, you say, FEA is a completely adequate intellectual tool. But you are just wrong when you say it is the way neuroscientists think about brains.

Can't you see how FEA is set up to remove questions of downward constraint from the analysis?

In the real world, systems have pressures and temperatures due to some global history. There is a context that causes the equilibrium results. But then FEA comes into this active situation and throws an infinitely thin 2D measuring surface around a volume. A surface designed not to affect the dynamics in any fashion - any top-down fashion! The surface is designed only to record the local degrees of freedom, not change them. You have even specified that the measurements are classical, because you know there is a quantum limit to how thin and non-interactive, yet still measuring, such a surface can be.

So what you are claiming as an ontological result (reality is computational) is just simply a product of your epistemology (you are measuring only local causes). And the map is not the terrain.

You are claiming that FEA is useful (nay, powerfully predictive) in modelling brains. So can you supply references where neuroscientists have employed FEA to deliver brilliantly insightful results? Where are the papers that now back up your claim?
 
  • #64
Q_Goest said:
cellular automata are weakly emergent. They are separable and have no "downward" causal effects on local elements.
Don't you think these glider guns (http://en.wikipedia.org/wiki/Gun_(cellular_automaton)) challenge this view?
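To make the objection concrete, here is the purely local update rule at issue, sketched in a few lines (standard Life rules; the glider below is the simplest of the patterns a Gosper gun emits):

```python
# Conway's Game of Life: each cell's next state depends only on its
# eight neighbours -- the canonical "local elements" of the debate.
from collections import Counter

def step(live):
    """One Life generation; live is a set of (row, col) cells."""
    counts = Counter((r + dr, c + dc)
                     for (r, c) in live
                     for dr in (-1, 0, 1)
                     for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    # Birth on exactly 3 live neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The standard glider: after 4 generations it reappears shifted (1, 1).
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
print(state == {(r + 1, c + 1) for (r, c) in glider})  # True
```

Every cell's fate is fixed by its eight neighbours alone, yet the five-cell pattern coherently propagates; whether that coherence amounts to downward causation on the local elements is exactly the question being posed.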
 
  • #65
Ken G said:
Personally, I am heavily swayed by the formalist approach. I feel that there is an important difference between logical truth, which is rigorous and syntactic, and semantic meaning, which is experiential and nonrigorous. Mathematics is primarily about the former, because of its reliance on proof, and physics is primarily about the latter, because of its reliance on experiment. Why the two find so much common ground must be viewed as the deepest mystery in the philosophy of either, and I have no answer for it, other than it seems there is some rule that says what happens will be accessible to analysis, but the analysis will not be what happens.
Don't you think this is a deep mystery only if one takes the formalist approach?
 
  • #66
Lievo said:
Don't you think this is a deep mystery only if one takes the formalist approach?
Ah, interesting question. No doubt this is indeed the central basis of the Platonist idea that mathematical truths lie at the heart of reality, such that when we discover those truths, we are discovering reality. That was an easy stance to take in ancient times, but I would say that many of the discoveries of physics and math since then are leading us to see that stance as fundamentally naive. In physics, we had things like the discovery of general relativity, which calls into question just how "obvious" it is that the inertial path is a straight line in spacetime. Granted, that earlier view is replaced by an equally mathematical aesthetic, albeit a more abstract one, so one might say "see, it's still fundamentally mathematical, it's just different mathematics." To which I would respond, how many times are we allowed to say "OK, we were wrong before, but this time we have it right"?

I would say that if we get a mathematical model, interpret it as the truth of the reality, and find time and again that it isn't, at some point we should just stop interpreting it as the truth of the reality. At which point, we begin to become amazed we did so well in the first place. In other words, I'm not sure which is more surprising, that our models are so accurate, or that a model can be so accurate and still not be true.

Then there's also the Gödel proof, which shows that in any reasonably powerful and consistent mathematical system, there have to be things that are true that cannot be proven from the finite axioms of that system (which means they cannot be proven at all, since any system for generating consistent axioms is itself a system of proof). This means there always has to be a permanent difference between what is true by meaning and what is provable by axiom. It may be a very esoteric and never-encountered difference, but it has to be there-- and I think that in fact the difference is not esoteric and is encountered constantly, which is why physics keeps changing.
 
  • #67
Ken G said:
Ah, now there's an interesting turn of phrase, "functionally duplicate." What does that mean? It sounds like it should mean "acts in the same way that I intended the model to succeed at acting", but you sound like you are using it to mean "does everything the system does."

In this regard, those who see deep philosophical truths in deterministic chaos need to remember the "dirty little secret" of truncation error in computational simulations of chaotic trajectories.

The shadowing lemma says the trajectories may stay "sufficiently close" for many practical purposes. So they can be functionally duplicate. But this is not the same as claiming an ontological-level duplication. The model is never actually replicating the system in the strict sense of philosophical determinism. Indeed, we know that it definitely isn't. Shadowing says only that the probability is high that the simulation will remain in the vicinity of what it pretends to duplicate!

Here is an intro text on shadowing.

http://www.cmp.caltech.edu/~mcc/Chaos_Course/Lesson26/Shadowing.pdf
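The point is easy to see numerically. A toy illustration (my own sketch, not taken from the linked notes): iterate the logistic map in full double precision, and again with a small rounding imposed at each step to stand in for truncation error.

```python
# Sensitivity of a chaotic map to truncation error: iterate the
# logistic map x -> 4x(1-x) in full double precision, and again with
# a rounding to 12 decimal places applied each step to mimic a
# truncated number representation. The tiny per-step discrepancy is
# amplified by roughly a factor of 2 per iteration, so the two
# "deterministic" orbits soon decorrelate completely.
f = lambda x: 4.0 * x * (1.0 - x)

x_full = x_trunc = 0.2
separations = []
for _ in range(60):
    x_full = f(x_full)
    x_trunc = round(f(x_trunc), 12)   # truncate to 12 decimal places
    separations.append(abs(x_full - x_trunc))

# Early on the orbits agree to ~1e-12; by the end they differ at
# order one (though both remain confined to [0, 1]).
print(separations[0], max(separations[-20:]))
```

Shadowing guarantees only that the truncated orbit stays near *some* true orbit of the map, not near the particular orbit it set out to follow.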
 
  • #68
The truncation error is not significant. If I run the same deterministic system twice, I won't get different results because of truncation error. The two systems will have exactly the same fate.

Anyway, as Ken's been saying, the computer models are tools for analysis, not deep philosophical statements.
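That repeatability claim is easy to check (a trivial sketch):

```python
# Repeatability of deterministic floating-point computation: the same
# iteration, run twice from the same start, produces bit-identical
# results. Truncation error is perfectly reproducible; it just makes
# the computed orbit differ from the exact mathematical one.
def orbit(x0, n):
    x = x0
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)   # logistic map at r = 4
    return x

a = orbit(0.2, 1000)
b = orbit(0.2, 1000)
print(a == b)  # True: the two runs share exactly the same fate
```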
 
  • #69
apeiron said:
Shadowing says only that the probability is high that the simulation will remain in the vicinity of what it pretends to duplicate!

Here is an intro text on shadowing.

http://www.cmp.caltech.edu/~mcc/Chaos_Course/Lesson26/Shadowing.pdf
That is interesting. I interpret that as saying a computed trajectory can be viewed as an approximation of some true trajectory that the deterministic mathematical system does support, so it's not a complete fiction of the computation, but it is not necessarily the trajectory that every real system in the neighborhood would follow-- even if the mathematical model were a true rendition of reality. So this supports your point that we know deterministic models of chaotic systems cannot be the whole truth, even if we are inclined to believe they are close to the truth. The situation is even worse if we are inclined to be skeptical that there is any such thing as a "true trajectory" of a physically real system, let alone that a model can reproduce it consistently for long times.
 
  • #70
Pythagorean said:
The truncation error is not significant. If I run the same deterministic system twice, I won't get different results because of truncation error. The two systems will have exactly the same fate.
That doesn't mean the truncation error is never significant, it means the truncation error is consistent, which is something different-- it might be consistently significant!
Anyway, as Ken's been saying, the computer models are tools for analysis, not deep philosophical statements.
That is the key issue, yes.
 

Similar threads

Replies
190
Views
9K
Replies
14
Views
7K
Replies
2
Views
2K
Replies
12
Views
1K
  • Quantum Interpretations and Foundations
Replies
2
Views
780
  • General Discussion
6
Replies
199
Views
31K
  • Quantum Interpretations and Foundations
2
Replies
37
Views
2K
Replies
6
Views
749
  • Quantum Interpretations and Foundations
2
Replies
37
Views
1K
  • Quantum Interpretations and Foundations
7
Replies
213
Views
10K
Back
Top