Does Neuroscience Challenge the Existence of Free Will?

  • Thread starter: Ken G
  • Tags: Free will, Neural
Summary
The discussion centers on the implications of Benjamin Libet's research, which suggests that decisions occur in the brain before conscious awareness, raising questions about free will and determinism. Participants explore whether this indicates a conflict between determinism and free will, proposing that neurological processes may be deterministic while free will could exist in a non-physical realm. The conversation critiques the reductionist view that equates physical processes with determinism, arguing instead for a more nuanced understanding that includes complexity and chaos theory. The idea that conscious and unconscious processes are distinct is emphasized, with a call for a deeper exploration of how these processes interact in decision-making. The limitations of current neuroscience in fully understanding consciousness and free will are acknowledged, suggesting that a systems approach may be more effective than reductionist models. Overall, the debate highlights the complexity of free will, consciousness, and the deterministic nature of physical processes, advocating for a more integrated perspective that considers both neurological and philosophical dimensions.
  • #31
Lievo said:
Are you sure you aren't mixing up my argument with someone else's?
I did conflate your argument with Q_Goest's; my apologies.
Didn't I explicitly say the same thing? Again, my analogy says nothing about whether consciousness and free will are or are not deterministic. It just shows that neither is at the root of the problem in interpreting Libet's finding, because one can explicitly construct the same kind of result while excluding both free will and determinism.
Yes, and I agree with you-- Libet's finding really doesn't say much about free will at all; it says something about how we come under the conscious impression of having free will. That might be something quite different from free will, just as the conscious impression of getting burned by a stove is quite different from the process of burning. I should not have taken issue with your comments; I think we are largely in agreement.
What is important is that from these mathematical definitions we can infer whether this or that property leads to predictions. If an aspect of the model cannot lead to a prediction, then you have a mathematical guarantee that the property is not worth caring about. If it does allow some prediction, then you can check reality to decide which kind of model can or cannot describe it: with or without the property?
Yes, I agree, the purpose of the mathematics is to empower the predictions, not to identify the actual process. In fact, I would say the express purpose of a mathematical model is to replace the actual process with something that fits inside our heads. For some reason, this replacement often gets misconstrued as a complete description, missing the point that the whole purpose was not to provide a complete description.
From the mathematical definition of randomness, an informed guess is that either randomness isn't at the root of free will, or free will can account for nothing. From the mathematical definition of computability, you can infer that either free will is deterministic or it allows hypercomputing.
No, this is the point: no mathematical definition can tell you anything about free will other than whether or not the mathematical definition is a useful replacement for free will. It certainly can't tell you if free will is deterministic, unless one adopts the weak meaning that anything usefully replaced by a deterministic model is what we mean by "deterministic" when applied to a real thing.

So if one found evidence for hypercomputing, that would be evidence against determinism. Notice hypercomputing doesn't mean unpredictability; it means extraordinary abilities. See Penrose for one who defends this line of thought, and in particular defends the idea that mathematicians have such superpowers.
An interesting tack, but all too easy to say, "according to the mathematician." An artist might say that artists have superpowers. My point here is only that there is no need to find evidence against determinism, the responsibility lies squarely on those who claim that determinism has something to do with free will, either for or against, to demonstrate that property.
 
  • #32
Q_Goest said:
I understand what you're getting at, but chaotic systems are clearly defined as deterministic in the literature as I've quoted above.
But you see the error there right away, the word "defined" is inconsistent with the word "system." We don't define systems, we notice them. What we define are mathematical models of systems, but a model is never a system. If the literature is being lazy on this point, then it is really missing something important, perhaps along the lines of what apeiron is saying it is missing.

Yes, they are mathematically deterministic.
No, systems are not mathematically deterministic, because systems are not mathematics.
Are they physically deterministic?
That's the issue.
When looking at the 'weather' or any other fluid system for that matter, we use statistical mechanics to define the fluid's momentum, density, internal energy, etc... at any point and at any time, and to the degree those values are accurate, the model will make accurate predictions. The fact that a fluid's momentum is made up of an aggregate of molecules and those molecules are being lumped together means that we can never be perfectly accurate. But does that really matter?
That's indeed the question. Or the follow-on question, does it matter to whom, and in what way? I would say it all depends on the goals. I think those who make models sometimes seem to forget that they are making models for a reason, they have a goal, and that goal is never to describe completely that which they model, for a complete description is not a model at all, it is only the system itself.

So are you suggesting that physical determinism isn't possible because we can't know the micro states, or are you suggesting that there might be some kind of strong emergence, and thus a form of downward causation that subordinates local physical laws? Or are you suggesting such systems aren't deterministic for some other reason?
I'm suggesting that determinism is itself a construct, a mathematical idea, not necessarily applicable to real systems except that it makes a useful template to hold up to them-- just as all mathematical models of reality are useful templates. That's easy to state, but the issue in regard to free will is that we don't yet know what elements of free will we are even trying to model, so we cannot say whether or not determinism is a useful template to hold up to free will. We already have examples, in weather and in quantum mechanics, where determinism is not always a useful template, though it does have some applicability and some tendency to break down.
 
  • #33
Q_Goest,

"Not quantum" doesn't mean classical. Nonlinear dynamics and complex systems are modern physics; in my undergrad curriculum they are taught in the two-semester modern physics course, after quantum and relativity.

They do make use of classical physics (more so than QM does, for instance) but they are not constrained by classical physics, especially because they allow for dissipative (and stochastic) processes.

Dissipative processes in thermodynamics are irreversible. Moving through a conservative force field, like gravity, you can completely recover your ground... but in the real world we have friction: a dissipative process from which heat and entropy flow.

This all becomes very important in turbulence models, where heat dissipation and entropy are rampant amid correlated deterministic behavior (and they alter the chaotic deterministic behavior, so it's hard to predict how small, random changes from heat dissipation can manifest large consequences).
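To make the distinction concrete, here is a minimal Python sketch (my own toy construction, with arbitrary parameter values, not drawn from the papers cited below): a damped oscillator is dissipative and irreversible, and adding a small stochastic forcing term makes even the model non-deterministic.

```python
import numpy as np

# Damped harmonic oscillator with optional stochastic forcing (Langevin-style).
# All parameter values are arbitrary illustrative choices.
def simulate(gamma=0.1, omega=1.0, noise=0.0, dt=1e-3, steps=20000, seed=0):
    rng = np.random.default_rng(seed)
    x, v = 1.0, 0.0
    for _ in range(steps):
        # Conservative restoring force minus dissipative (irreversible) friction.
        a = -omega**2 * x - gamma * v
        v += a * dt + noise * np.sqrt(dt) * rng.standard_normal()
        x += v * dt
    return x, v

print(simulate(noise=0.0))   # dissipative but deterministic: motion decays toward rest
print(simulate(noise=0.05))  # dissipative and stochastic: the outcome depends on the noise draw
```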

On stochastic non-holonomic systems
N. K. Moshchuk and I. N. Sinitsyn
Journal of Applied Mathematics and Mechanics, Vol. 54, No. 2, 1990, pp. 174-182

Cumulants of Stochastic Response for a Class of Special Nonholonomic Systems
Shang Mei and Zhang Yi
Chinese Physics, Vol. 10, No. 1, January 2001
 
  • #34
Pythagorean said:
Q_Goest,

"Not quantum" doesn't mean classical. Nonlinear dynamics and complex systems are modern physics; in my undergrad curriculum they are taught in the two-semester modern physics course, after quantum and relativity.

They do make use of classical physics (more so than QM does, for instance) but they are not constrained by classical physics, especially because they allow for dissipative (and stochastic) processes.

Dissipative processes in thermodynamics are irreversible (that is one of the physical meanings of non-holonomic). Moving through a conservative force field, like gravity, you can completely recover your ground... except in the real world we have friction: a dissipative process.

This all becomes very important in turbulence models, where heat dissipation and entropy are rampant amid correlated deterministic behavior (and they alter the chaotic deterministic behavior, so it's hard to predict how small, random changes from heat dissipation can manifest large consequences).

On stochastic non-holonomic systems
N. K. Moshchuk and I. N. Sinitsyn
Journal of Applied Mathematics and Mechanics, Vol. 54, No. 2, 1990, pp. 174-182

Cumulants of Stochastic Response for a Class of Special Nonholonomic Systems
Shang Mei and Zhang Yi
Chinese Physics, Vol. 10, No. 1, January 2001

Nonlinear dynamics includes nonlinear optics, right? (Just clarifying for myself here, not a leading question.)
 
  • #35
My point about nonlinear dynamics in general is that it starts with a kind of fiction, which is that the system has "a state." Mathematically, if we have nonlinear dynamics and start at a state, we have deterministic evolution that exhibits sensitivity to initial conditions. However, if we don't actually have a state, but instead a collection of states involving some uncertainty, then our initial uncertainty grows with time. Mathematically, we would still call that deterministic, because we have a bundle of deterministic trajectories that fan out and cover most or all of the accessible phase space. But physically, if we have an initial uncertainty that grows, we cannot call that deterministic evolution, because we cannot determine the outcome. Hence, if we cannot assert that the reality begins in "a state", we cannot say that its future is determined either. Rather, we see determinism for what it is-- a gray scale of varying degrees of predictability, not an absolute statement of how things evolve.

The Catch-22 of chaotic systems is that we cannot demonstrate that the system begins in anything other than a state of uncertainty; nothing else is actually demonstrable. It is purely a kind of misplaced faith in a mathematical model that tells us a macroscopic system actually has a state. Even quantum mechanically, a macro system is treated as a mixed state, which is of course not distinguishable from an uncertain state (and here I do not refer to the Heisenberg uncertainty of pure states, but to the garden-variety uncertainty of mixed states).
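A quick numerical sketch of that "bundle of trajectories" point (Python, using the logistic map as an arbitrary stand-in for any chaotic system):

```python
import numpy as np

# A bundle of 1000 deterministic trajectories starting inside a tiny initial
# uncertainty, iterated under the logistic map at r = 4 (fully chaotic).
rng = np.random.default_rng(1)
x = 0.3 + 1e-9 * rng.random(1000)  # "a state" plus garden-variety uncertainty

for step in range(1, 61):
    x = 4.0 * x * (1.0 - x)
    if step % 15 == 0:
        print(f"step {step:2d}: bundle spread = {x.max() - x.min():.3e}")
```

Every trajectory in the bundle is individually deterministic, yet within a few dozen iterations the spread is of order one: the fan covers essentially the whole accessible interval, and the "determinism" no longer determines anything we can use.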
 
  • #36
Q_Goest said:
I’m not familiar with “holonomic” so I did a search:

It sounds like Pattee simply wants these macromolecules and genetics to have a stronger causal role in evolution, but I'm not sure exactly what he's getting at. Perhaps you could start a new thread regarding Pattee and his contributions to philosophy and science.

Given that Pattee is an excellent cite for the systems view, this is certainly the right place to mention him :rolleyes:.

What he is talking about here is the symbol grounding issue - how non-holonomic constraints can actually arise in the natural world. Genetic information has to make itself separate from what it controls to be able to stand as a level of top-down control.

Sure, Baranger's paper is pretty basic, but it clearly makes the point that chaotic systems are deterministic given precise initial conditions, which is relevant to the OP.

That was hardly the thrust of the paper. And the more correct statement is that chaotic systems (such as the weather) can be modelled using the mathematical tools of deterministic chaos. This is different from the claim that the weather, or any other system, is deterministically chaotic in the ontic sense.

So sure, the models behave a certain way - they unfold mechanistically from their initial conditions. And they certainly resemble the observables of real-world systems like the weather. But we also know that the models depend on unrealistic assumptions (such as a real-world ability to measure initial conditions with complete accuracy).

From a philosophical view, you just can't jump from "looks like" to "is". Especially when you know there are ways that "it isn't".

I think it's important also to separate out chaotic systems that are classical (and separable) in a functional sense, such as Bénard cells, from systems that are functionally dependent on quantum-scale interactions. Our present-day paradigm for neuron interactions is that they are dependent on quantum-scale interactions, so it seems to me one needs to address the issue of how one is to model these "non-holonomic" properties (classical or quantum mechanical influences) and whether or not such a separation should make any difference.

Pardon me? Did you just suggest that a QM basis to neural function was mainstream?

This is a good example of what confuses me about everything you say about this "systems approach". Are you suggesting these "top-down constraints" are somehow influencing and subordinating local causation? That is, are you suggesting that causes found on the local level (such as individual neuron interactions) are somehow being influenced by the top down constraints such that the neurons are influenced not only by local interactions, but also by some kind of overall, global configuration?

What I've said is that global constraints act top-down to restrict local degrees of freedom. So that in a strong sense does create what is there at the local scale. Of course the logic is interactive. It is a systems approach. So the now-focused degrees of freedom that remain must in turn construct the global scale (that is making them).

This is how brains work. A neuron has many degrees of freedom. A particular neuron (in a baby's brain, or other unconstrained state) will fire to just about anything. But when a global state of attention prevails, the firing of that neuron becomes highly constrained. It becomes vigorous only in response to much more specific inputs. This is a very basic fact of electrophysiology studies.

So it is not just a theory, it is an observed fact. And yes, this is not the way machines work in general.
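A toy sketch of that constraint logic (my own illustrative construction in Python, not a model taken from the electrophysiology literature): let a global "attention" variable narrow the tuning of a local unit, so the same neuron that fired to just about anything now responds only near its preferred input.

```python
import numpy as np

def firing_rate(stimulus, preferred=0.0, attention=1.0):
    """Gaussian tuning curve whose width shrinks as the global state sharpens.

    attention = 1  -> broad tuning: the unit fires to just about anything
    attention = 10 -> local degrees of freedom restricted by the global state
    """
    width = 1.0 / attention
    return np.exp(-(stimulus - preferred) ** 2 / (2.0 * width**2))

stimuli = np.linspace(-2.0, 2.0, 9)
print(np.round(firing_rate(stimuli, attention=1.0), 2))   # unconstrained firing
print(np.round(firing_rate(stimuli, attention=10.0), 2))  # highly constrained firing
```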

After rereading his paper, I'd say that he does in fact try to separate mental states (phenomenal states) from the underlying physical states, as you say, but that mental states are epiphenomenal isn't an unusual position for computationalists. Frank Jackson's "Epiphenomenal Qualia", for example, is a much-cited paper that contends exactly that. So I'd say Farkus is in line with many philosophers on this account. He's suggesting mental states ARE physical states, and it is the mental properties that are "causally irrelevant" and an epiphenomenon (using his words), which I'd say is not unusual in the philosophical community.

I'm not holding up the Farkus paper as a shining example of the systems view. As I made plain, it was just what I happened to be reading that day, and my remark was that here is another one reinventing the wheel.

But I think you are also reading your own beliefs into the words here.

Not that there aren’t logical problems with that approach. He states for example:

That says to me that he accepts neurons only interact locally with others, but that we can also examine interactions at higher levels, those defined by large groups of neurons.

I don't see the issue. This is the standard view of hierarchy theory. Except you introduced the word "only" here, to suggest Farkus meant that there are not also the local~global interactions that make the brain a system.

There are some areas in his paper I’m not too sure about. Take for example:

If he's suggesting that this "higher level" is not determined in a deterministic way by the local interactions of neurons at the lower level, then that sounds like strong downward causation, which is clearly false. Certainly, there are people who would contend that something like that would be required for "free will" or any theory of mental causation. But I'm not sure that's really what he wants.

What he says is that you have two things going on. The higher level has a long-run memory which causes what we might call its persistent state. Then it is also responding to the input coming from below, so its state is also "caused" by that.

If you dig out Stephen Grossberg's neural net papers, or Friston's more recent Bayesian brain papers, you will get a much more elegant view. Yet one with the same essential logic.

In another questionable section he states:

In the part emphasized, I'd say he's trying to suggest that a person is somehow "immediately" and "simultaneously" affected by a "global state" on entering this classroom, which I picture as being a zone of influence of some sort per Farkus. Were the same person to enter the same room but be blind and deaf, would these same "global states" immediately and simultaneously affect that person too? It sounds like Farkus wants his readers to believe that as well, but that sounds too much like magic to me.

Surely he is just using an analogy and not suggesting that psi is involved :smile:. Why would his explicit claim that a person "senses" the atmosphere be read instead as a claim that a person who could not sense (being blind and deaf) would still sense?

All he is saying is that there is an ambient emotional state in the classroom - a generally shared state averaged across a connected set of people. Any newcomer then will respond to this globally constraining atmosphere.

I think this is a good lead into strong emergence and strong downward causation which, in one way or another, is necessary for mental causation and free will. The question really is, can the higher physical levels somehow subordinate the local interactions of neurons? And if so, how?

Excellent. But there are so many thousands of papers on the neuroscience of top-down attentional effects on neural receptive fields that it is hard to know where to start.

Here is a pop account with some useful illustrations.
http://www.sciencedaily.com/releases/2009/03/090325132326.htm

Here is a rather general review.
http://pbs.jhu.edu/bin/q/f/Yantis-CDPS-2008.pdf
 
  • #37
nismaratwork said:
Nonlinear dynamics includes nonlinear optics, right? (Just clarifying for myself here, not a leading question.)

It appears so. I'm not sure, though, that it's always dynamical just because it's nonlinear optics; there are plenty of dynamical examples in nonlinear optics, but what I've seen are "simple" systems, not "complex" systems. But the Kerr effect (part of the origin of nonlinear optics) certainly appears like a nonlinear dynamical event to my mind.

As for complexity, we can model a complex network of "cells" (functional partitions in a material) as a system responding to the injected energy (the electromagnetic source) and talk about how information propagates through the system; that's a complex dynamical system.
 
  • #38
Ken G said:
My point about nonlinear dynamics in general is that it starts with a kind of fiction, which is that the system has "a state." Mathematically, if we have nonlinear dynamics and start at a state, we have deterministic evolution that exhibits sensitivity to initial conditions. However, if we don't actually have a state, but instead a collection of states involving some uncertainty, then our initial uncertainty grows with time. Mathematically, we would still call that deterministic, because we have a bundle of deterministic trajectories that fan out and cover most or all of the accessible phase space. But physically, if we have an initial uncertainty that grows, we cannot call that deterministic evolution, because we cannot determine the outcome. Hence, if we cannot assert that the reality begins in "a state", we cannot say that its future is determined either. Rather, we see determinism for what it is-- a gray scale of varying degrees of predictability, not an absolute statement of how things evolve.

The Catch-22 of chaotic systems is that we cannot demonstrate that the system begins in anything other than a state of uncertainty; nothing else is actually demonstrable. It is purely a kind of misplaced faith in a mathematical model that tells us a macroscopic system actually has a state. Even quantum mechanically, a macro system is treated as a mixed state, which is of course not distinguishable from an uncertain state (and here I do not refer to the Heisenberg uncertainty of pure states, but to the garden-variety uncertainty of mixed states).

Well, sure, don't confuse the model with reality; that's always a good thing to remember.
 
  • #39
Pythagorean said:
It appears so. I'm not sure, though, that it's always dynamical just because it's nonlinear optics; there are plenty of dynamical examples in nonlinear optics, but what I've seen are "simple" systems, not "complex" systems. But the Kerr effect (part of the origin of nonlinear optics) certainly appears like a nonlinear dynamical event to my mind.

As for complexity, we can model a complex network of "cells" (functional partitions in a material) as a system responding to the injected energy (the electromagnetic source) and talk about how information propagates through the system; that's a complex dynamical system.

Hmmmm... I like it... any good reading you could recommend?
 
  • #40
Ken G said:
I think we are largely in agreement.
I think so, but for one subtle choice of wording, and that's maybe not so trivial.

Ken G said:
no mathematical definition can tell you anything about free will other than whether or not the mathematical definition is a useful replacement for free will (...) unless one adopts the weak meaning that anything usefully replaced by a deterministic model is what we mean by "deterministic" when applied to a real thing.
This is well said and I largely agree, but for one word: what you call the weak meaning is what I'd call the physical or scientific meaning, and any other kind of meaning I'd call either metaphysical, weak, or boring, depending on the mood ;-)

Ken G said:
all too easy to say, "according to the mathematician."
What is easy is not necessarily wrong :biggrin:
 
  • #41
Ken G said:
The Catch-22 of chaotic systems is that we cannot demonstrate that the system begins in anything other than a state of uncertainty
Do you see a deep difference from non-chaotic systems?
 
  • #42
Lievo said:
This is well said and I largely agree, but for one word: what you call the weak meaning is what I'd call the physical or scientific meaning, and any other kind of meaning I'd call either metaphysical, weak, or boring, depending on the mood ;-)
I can accept that, it's just that in my experience, that scientific meaning often leaks over into a kind of metaphysical meaning without even realizing it. It's like the way many people claim to take a "shut up and calculate" approach to physics-- until they don't.
 
  • #43
Lievo said:
Do you see a deep difference from non-chaotic systems?
The difference is in how much it matters. For convergent trajectories, whether in reality there exists an exact initial state, or just a reasonably precise one, makes little difference when held to the lens of determinism. In chaotic systems, there's a huge difference in the practice of prediction, which is the operational meaning of determinism. So chaotic systems teach us, not that determinism leads to the butterfly effect, but that the butterfly effect challenges the very meaning of determinism.
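The contrast is easy to see numerically (a Python sketch with maps of my own choosing):

```python
# Track how an initial uncertainty evolves under a convergent map versus a
# chaotic one. Both maps are deterministic on paper.
def final_error(f, x0, eps=1e-10, steps=40):
    a, b = x0, x0 + eps
    for _ in range(steps):
        a, b = f(a), f(b)
    return abs(a - b)

convergent = lambda x: 0.5 * x            # contracting: trajectories converge
chaotic = lambda x: 4.0 * x * (1.0 - x)   # logistic map at r = 4: errors grow exponentially

print(final_error(convergent, 0.3))  # the initial uncertainty is forgotten
print(final_error(chaotic, 0.3))     # the initial uncertainty has saturated to order one
```

For the convergent map the exact initial state scarcely matters; for the chaotic one the useful prediction horizon is only about ln(1/eps) divided by the Lyapunov exponent, which is the operational erosion of determinism described above.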
 
  • #44
Ken G said:
My point about nonlinear dynamics in general is that it starts with a kind of fiction, which is that the system has "a state." Mathematically, if we have nonlinear dynamics and start at a state, we have deterministic evolution that exhibits sensitivity to initial conditions. However, if we don't actually have a state, but instead a collection of states involving some uncertainty, then our initial uncertainty grows with time. Mathematically, we would still call that deterministic, because we have a bundle of deterministic trajectories that fan out and cover most or all of the accessible phase space. But physically, if we have an initial uncertainty that grows, we cannot call that deterministic evolution, because we cannot determine the outcome. Hence, if we cannot assert that the reality begins in "a state", we cannot say that its future is determined either. Rather, we see determinism for what it is-- a gray scale of varying degrees of predictability, not an absolute statement of how things evolve.

The Catch-22 of chaotic systems is that we cannot demonstrate that the system begins in anything other than a state of uncertainty; nothing else is actually demonstrable. It is purely a kind of misplaced faith in a mathematical model that tells us a macroscopic system actually has a state. Even quantum mechanically, a macro system is treated as a mixed state, which is of course not distinguishable from an uncertain state (and here I do not refer to the Heisenberg uncertainty of pure states, but to the garden-variety uncertainty of mixed states).

But isn't the collective whole of all the macroscopic systems actually one big deterministic system at the lowest level? It seems to me it becomes a matter of scope, and not an inherent non-determinism in the system as a whole. Also, just hypothesizing: wouldn't the universe during the big bang start out with almost no separated states and then evolve into many different states, thus creating increased separation of systems, while ultimately remaining one big system?
 
  • #45
octelcogopod said:
But isn't the collective whole of all the macroscopic systems actually one big deterministic system at the lowest level? It seems to me it becomes a matter of scope, and not an inherent non-determinism in the system as a whole. Also, just hypothesizing: wouldn't the universe during the big bang start out with almost no separated states and then evolve into many different states, thus creating increased separation of systems, while ultimately remaining one big system?

That would seem to depend on which interpretation of QM one subscribes to.
 
  • #46
Ken G said:
scientific meaning often leaks over into a kind of metaphysical meaning without even realizing it
I couldn't agree more. I'm not sure it really makes a difference for physicists, but in "softer" sciences this has an impact. One example is incautious naming. The ABO gene is of course the gene for the ABO phenotype, right? But once we name it this way, it becomes hard to remember that it may or may not be implicated in functions that have nothing to do with the ABO phenotype. Not to mention that I've just said "gene" as if it were something tangible, rather than something we define after we find it's associated with some function.

Ken G said:
In chaotic systems, there's a huge difference in the practice of prediction, which is the operational meaning of determinism. So chaotic systems teach us, not that determinism leads to the butterfly effect, but that the butterfly effect challenges the very meaning of determinism.
I understand your point, but you're going too far in thinking the butterfly effect really introduces something new. As you know, the butterfly effect means that small perturbations can quickly become large perturbations. But this is an experimental prediction in itself, and in itself it is testable!

Of course what you actually underline is that there exist "bad questions" that can't be answered, such as: what will be the future state of a chaotic system, given a known present state? Theory says this is a bad question, and all you can do is assign probabilities to different possible outcomes. Maybe you can interpret that as a challenge to determinism, but the fact is that the existence of "bad questions" is not new. For example, if you want to know both the position and the momentum of a particle with better precision than what Heisenberg allows, this is easy to ask and impossible to answer. The theory says you can't know that, and this prohibition is an experimental prediction in itself.

So, sometimes the prediction is that the prediction you want is not something reality allows you to know. Is that a challenge to determinism? Maybe, but it's certainly not specific to chaos. To me this just shows that language allows us to ask a larger set of questions than the set of all meaningful questions.
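For reference, the Heisenberg bound being invoked is the standard relation

$$\Delta x \,\Delta p \ge \frac{\hbar}{2},$$

which is itself a testable statement about what cannot be jointly measured to arbitrary precision.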
 
  • #47
The claim that there might be some kind of randomness to nature, at any level, doesn't provide for free will. There has to be some form of strong downward causation, which first requires the system to be nonseparable and then requires the whole of the system to intervene in or subordinate the physical state of some part of that system. That might describe molecular systems, but it doesn't describe the conventional view of neuron interactions.
 
  • #48
Q_Goest said:
There has to be some form of strong downward causation, which first requires the system to be nonseparable and then requires the whole of the system to intervene in or subordinate the physical state of some part of that system.
What is the evidence for thinking that this is the conventional view?
 
  • #49
Ken G said:
I can accept that, it's just that in my experience, that scientific meaning often leaks over into a kind of metaphysical meaning without even realizing it. It's like the way many people claim to take a "shut up and calculate" approach to physics-- until they don't.
While thinking about it: there is a very similar question in mathematics (the Platonist vs. formalist interpretation). I'd be interested in your position on this.
 
  • #50
Lievo said:
What is the evidence for thinking that this is the conventional view?
Good question. The computationalist crowd, including Chalmers, Chrisley, Copeland, Endicott, and many others, and those not in the computationalist crowd, such as Maudlin, Bishop, Putnam, etc., all argue over the issue of separability as discussed to some degree in my other thread regarding Zuboff's paper. If a system is separable, it's not holistic in any sense that allows for the emergence of a phenomenon like consciousness.

But a nonseparable system isn't enough to allow for that system to have a "choice" in the sense that the strongly emergent phenomena can have control over the constituents in any meaningful way. There's been a lot of discussion in the past decade or two about emergence and how it applies to mental causation. If there is only local causation, then even if you have some kind of holistic, emergent phenomenon, it can have no influence over the constituent parts.
 
  • #51
Q_Goest said:
all argue over the issue of separability as discussed to some degree in my other thread regarding Zuboff's paper.
This may be evidence that the question was considered important, but of course it is not evidence that the tentative solution you pointed to is the conventional view :wink:

Q_Goest said:
If there is only local causation, then even if you have some kind of holistic, emergent phenomenon, it can have no influence over the constituent parts.
Don't you think these guns (http://en.wikipedia.org/wiki/Gun_(cellular_automaton)) challenge this view?
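For anyone who wants to see the point concretely, here is a minimal Python sketch of Conway's Life (using a simple glider rather than a full glider gun, to keep it short, but the update rule is identical): every step uses strictly local neighbor counts, yet a coherent pattern persists and propagates at the global scale.

```python
import numpy as np

def life_step(grid):
    """One Game of Life step: each cell is updated from purely local
    (8-neighbor) counts; there is no global rule anywhere."""
    n = sum(np.roll(np.roll(grid, i, 0), j, 1)
            for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))
    return ((n == 3) | (grid & (n == 2))).astype(int)

grid = np.zeros((12, 12), dtype=int)
# A glider, the simplest moving pattern (a gun emits a stream of these).
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[r, c] = 1

for _ in range(8):
    grid = life_step(grid)
print(grid)  # the glider has translated diagonally, intact
```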
 
  • #52
Lievo said:
While thinking about it: there is a very similar question in mathematics (the Platonist vs. formalist interpretation). I'd be interested in your position on this.
Yes, this would seem to be the key issue indeed. Personally, I am heavily swayed by the formalist approach. I feel that there is an important difference between logical truth, which is rigorous and syntactic, and semantic meaning, which is experiential and nonrigorous. Mathematics is primarily about the former, because of its reliance on proof, and physics is primarily about the latter, because of its reliance on experiment. Why the two find so much common ground must be viewed as the deepest mystery in the philosophy of either, and I have no answer for it myself, other than it seems there is some rule that says what happens will be accessible to analysis, but the analysis will not be what happens.

What's more, I think that Gödel's proof drove a permanent wedge between certainty and meaning, such that the two must forever be regarded as different things.

In the issue of free will, one must then ask if free will is housed primarily in the abstract realm of syntactic relationships, where concepts like determinism live, or primarily in the experiential realm of what it feels like to be conscious, where human perception and experience live. To me, it must be placed in the latter arena, which is why I think the whole issue of determinism vs. free will is a category error.
 
  • #53
Q_Goest said:
If there is only local causation, then even if you have some kind of holistic, emergent phenomenon, it can have no influence over the constituent parts.
But again one must ask, what is it that has only local causation, the brain, or the model of the brain that you are using for some specific purpose? It is important not to confuse the two, or you run into that Catch-22: you can never argue that the brain does not give rise to holistic emergent phenomena on the basis that you can successfully model the brain using local causation, because if local causation cannot lead to holistic emergent phenomena that influence the parts, then you may simply be building a model that cannot do what you are then using the model to try and argue the brain cannot do. In other words, you are telling us about the capabilities of models, not the capabilities of brains.
 
  • #54
Lievo said:
Don't you think these http://en.wikipedia.org/wiki/Gun_(cellular_automaton)" challenge this view?
Cellular automata were given as a prime example of "weak emergence" by Bedau. His paper is fairly popular, being cited over 200 times. Bottom line: cellular automata are weakly emergent. They are separable and have no "downward" causal effects on local elements.
 
  • #55
Q_Goest said:
That might describe molecular systems, but it doesn't describe the conventional view of neuron interactions.

But I just gave you references that show it IS the conventional view of neuron interactions.

The brain operates as an anticipatory machine. It predicts its next state globally and then reacts to the localised exceptions that turn up. Like Farkus's analogy of a classroom with an atmosphere, there is a running state that is the active context for what happens next.

This is completely explicit in neural network models such as Grossberg and Friston's.

So you are just wrong about the conventional view in neuroscience.

New Scientist did a good piece on Friston...
http://reverendbayes.wordpress.com/2008/05/29/bayesian-theory-in-new-scientist/
 
  • #56
Ken G said:
But again one must ask, what is it that has only local causation, the brain, or the model of the brain that you are using for some specific purpose? It is important not to confuse the two, or you run into that Catch-22: you can never argue that the brain does not give rise to holistic emergent phenomena on the basis that you can successfully model the brain using local causation, because if local causation cannot lead to holistic emergent phenomena that influence the parts, then you may simply be building a model that cannot do what you are then using the model to try and argue the brain cannot do. In other words, you are telling us about the capabilities of models, not the capabilities of brains.
But the brain IS modeled using typical FEA-type computational programs. They can use the Hodgkin-Huxley model or any other compartment method, of which there are a handful. FEA is an example of the philosophy of separable systems reduced to finite (linear) elements. Nevertheless, as I'd mentioned to apeiron, FEA is just a simplification of a full differential formulation, as I've described in the library: https://www.physicsforums.com/library.php?do=view_item&itemid=365. FEA and multiphysics software are a widespread example of the use of computations that functionally duplicate (classical) physical systems. Even highly dynamic ones such as Bénard cells, wing flutter, aircraft crashing into buildings, and even the brain are all modeled successfully using this approach. That isn't to say that FEA is a perfect duplication of a full differential formulation of every point in space. It's obviously not. However, the basic philosophical concept that leads us to FEA (ie: that all elements are in dynamic equilibrium at their boundaries) is the same basic philosophy that science and engineering use for brains and other classical physical systems.
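To ground what such a compartment computation looks like, here is a minimal single-compartment Hodgkin-Huxley sketch in Python (standard textbook parameters in the modern -65 mV convention, with crude forward-Euler integration; production simulators couple many such compartments and use better integrators):

```python
import numpy as np

# Single-compartment Hodgkin-Huxley model, forward-Euler integration.
C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3   # uF/cm^2 and mS/cm^2
ENa, EK, EL = 50.0, -77.0, -54.4         # reversal potentials, mV

def rates(V):
    """Standard HH gating-variable rate functions (1/ms)."""
    am = 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
    bm = 4.0 * np.exp(-(V + 65) / 18)
    ah = 0.07 * np.exp(-(V + 65) / 20)
    bh = 1.0 / (1 + np.exp(-(V + 35) / 10))
    an = 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
    bn = 0.125 * np.exp(-(V + 65) / 80)
    return am, bm, ah, bh, an, bn

dt, T, I_ext = 0.01, 50.0, 10.0          # ms, ms, uA/cm^2 injected current
V, m, h, n = -65.0, 0.05, 0.6, 0.32      # initial conditions near rest
spikes = 0
for _ in range(int(T / dt)):
    am, bm, ah, bh, an, bn = rates(V)
    m += dt * (am * (1 - m) - bm * m)
    h += dt * (ah * (1 - h) - bh * h)
    n += dt * (an * (1 - n) - bn * n)
    I_ion = gNa * m**3 * h * (V - ENa) + gK * n**4 * (V - EK) + gL * (V - EL)
    V_new = V + dt * (I_ext - I_ion) / C
    if V < 0 <= V_new:                   # crude upward zero-crossing spike count
        spikes += 1
    V = V_new
print(f"spikes in {T} ms at I = {I_ext}: {spikes}")
```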
 
  • #57
Q_Goest said:
Cellular automata were given as a prime example of "weak emergence" by Bedau. His paper is fairly popular, being cited over 200 times. Bottom line: cellular automata are weakly emergent. They are separable and have no "downward" causal effects on local elements.

Agreed, the very definition of Turing Computation is that there is no top-down causation involved, only local or efficient cause.

The programmable computer is the perfect "machine". Its operations are designed to be deterministic.

And there are now so many computers and computer scientists in society that people are coming to believe reality is also "computational". They can no longer imagine other forms of more complex causality, it seems.
 
  • #58
Hi apeiron,
apeiron said:
But I just gave you references that show it IS the conventional view of neuron interactions.
Hopefully my last post addresses this: neuron interactions are philosophically treated using the compartmental methods described there. If that's not what you feel is pertinent to the issue of separability, please point out specifically what you wish to address.
 
  • #59
apeiron said:
And there are now so many computers and computer scientists in society that people are coming to believe reality is also "computational". They can no longer imagine other forms of more complex causality, it seems.
But it's not just "computer scientists". People are using that basic philosophy (computational/FEA) for everything now (at a classical scale). We can't live without it because it's so powerfully predictive.
 
  • #60
apeiron said:
Pardon me? Did you just suggest that a QM basis to neural function was mainstream?
Sorry! Thanks for pointing that out. I fixed it (edited it). Of course QM is NOT the basis for neural function; that is what I meant.
 
