Does Neuroscience Challenge the Existence of Free Will?

  • Thread starter: Ken G
  • Tags: Free will, Neural
AI Thread Summary
The discussion centers on the implications of Benjamin Libet's research, which suggests that decisions occur in the brain before conscious awareness, raising questions about free will and determinism. Participants explore whether this indicates a conflict between determinism and free will, proposing that neurological processes may be deterministic while free will could exist in a non-physical realm. The conversation critiques the reductionist view that equates physical processes with determinism, arguing instead for a more nuanced understanding that includes complexity and chaos theory. The idea that conscious and unconscious processes are distinct is emphasized, with a call for a deeper exploration of how these processes interact in decision-making. The limitations of current neuroscience in fully understanding consciousness and free will are acknowledged, suggesting that a systems approach may be more effective than reductionist models. Overall, the debate highlights the complexity of free will, consciousness, and the deterministic nature of physical processes, advocating for a more integrated perspective that considers both neurological and philosophical dimensions.
  • #51
Q_Goest said:
all argue over the issue of separability as discussed to some degree in my other thread regarding Zuboff's paper.
This may be evidence that the question was considered important, but of course this is not evidence that the tentative solution you pointed to is the conventional view :wink:

Q_Goest said:
If there is only local causation, then even if you have some kind of holistic, emergent phenomenon, it can have no influence over the constituent parts.
Don't you think these glider guns (http://en.wikipedia.org/wiki/Gun_(cellular_automaton)) challenge this view?
 
Last edited by a moderator:
  • #52
Lievo said:
While thinking about it, I noticed there is a very similar question in mathematics (the Platonist vs. formalist interpretation). I'd be interested in your position on this.
Yes, this would seem to be the key issue indeed. Personally, I am heavily swayed by the formalist approach. I feel that there is an important difference between logical truth, which is rigorous and syntactic, and semantic meaning, which is experiential and nonrigorous. Mathematics is primarily about the former, because of its reliance on proof, and physics is primarily about the latter, because of its reliance on experiment. Why the two find so much common ground must be viewed as the deepest mystery in the philosophy of either, and I have no answer for it either, other than it seems there is some rule that says what happens will be accessible to analysis, but the analysis will not be what happens.

What's more, I think that Godel's proof drove a permanent wedge between certainty and meaning, such that the two must forever be regarded as something different.

In the issue of free will, one must then ask if free will is housed primarily in the abstract realm of syntactic relationships, where lives concepts like determinism, or primarily in the experiential realm of what it feels like to be conscious, where lives human perception and experience. To me, it must be placed in the latter arena, which is why I think the whole issue of determinism vs. free will is a category error.
 
  • #53
Q_Goest said:
If there is only local causation, then even if you have some kind of holistic, emergent phenomenon, it can have no influence over the constituent parts.
But again one must ask, what is it that has only local causation, the brain, or the model of the brain that you are using for some specific purpose? It is important not to confuse the two, or you run into that Catch-22: you can never argue that the brain does not give rise to holistic emergent phenomena on the basis that you can successfully model the brain using local causation, because if local causation cannot lead to holistic emergent phenomena that influence the parts, then you may simply be building a model that cannot do what you are then using the model to try and argue the brain cannot do. In other words, you are telling us about the capabilities of models, not the capabilities of brains.
 
  • #54
Lievo said:
Don't you think these glider guns (http://en.wikipedia.org/wiki/Gun_(cellular_automaton)) challenge this view?
Cellular automata were given as a prime example of "weak emergence" by Mark Bedau. His paper is fairly popular, being cited over 200 times. Bottom line, cellular automata are weakly emergent. They are separable and have no "downward" causal effects on local elements.
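The weak-emergence point is easy to see in miniature. Below is a toy sketch (an editor's illustration, not Bedau's code; the helper names are invented) of Conway's Game of Life: every update depends only on a cell's eight neighbours, yet a glider pattern "travels" across the grid without the rules ever referring to gliders:

```python
from collections import Counter

def step(live):
    """One Life generation; `live` is a set of (x, y) live cells.
    The rule is purely local: each cell looks only at its 8 neighbours."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The standard 5-cell glider.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
after4 = glider
for _ in range(4):
    after4 = step(after4)
# After 4 generations the glider reappears shifted diagonally by (1, 1).
```

The moving glider is fully derivable by simulation from the local rule, which is exactly the sense in which it is weakly emergent: there is no downward causation from "glider" onto the cells.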
 
  • #55
Q_Goest said:
That might describe molecular systems but it doesn't describe the conventional view of neuron interactions.

But I just gave you references that show it IS the conventional view of neuron interactions.

The brain operates as an anticipatory machine. It predicts its next state globally and then reacts to the localised exceptions that turn up. Like Farkus's analogy of a classroom with an atmosphere, there is a running state that is the active context for what happens next.

This is completely explicit in neural network models such as Grossberg and Friston's.

So you are just wrong about the conventional view in neuroscience.

New Scientist did a good piece on Friston...
http://reverendbayes.wordpress.com/2008/05/29/bayesian-theory-in-new-scientist/
 
  • #56
Ken G said:
But again one must ask, what is it that has only local causation, the brain, or the model of the brain that you are using for some specific purpose? It is important not to confuse the two, or you run into that Catch-22: you can never argue that the brain does not give rise to holistic emergent phenomena on the basis that you can successfully model the brain using local causation, because if local causation cannot lead to holistic emergent phenomena that influence the parts, then you may simply be building a model that cannot do what you are then using the model to try and argue the brain cannot do. In other words, you are telling us about the capabilities of models, not the capabilities of brains.
But the brain IS modeled using typical FEA type computational programs. They can use the Hodgkin-Huxley model or any other compartment method, of which there are a handful. FEA is an example of the philosophy of separable systems reduced to finite (linear) elements. Nevertheless, as I'd mentioned to apeiron, FEA is just a simplification of a full differential formulation, as I've described in the library (https://www.physicsforums.com/library.php?do=view_item&itemid=365). FEA and multiphysics software is a widespread example of the use of computations that functionally duplicate (classical) physical systems. Even highly dynamic ones such as Benard cells, wing flutter, aircraft crashing into buildings and even the brain are all modeled successfully using this approach. That isn't to say that FEA is a perfect duplication of a full differential formulation of every point in space. It's obviously not. However, the basic philosophical concept that leads us to FEA (i.e., that all elements are in dynamic equilibrium at their boundaries) is the same basic philosophy that science and engineering use for brains and other classical physical systems.
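The element-level philosophy described here can be sketched in a few lines. This is a toy finite-difference relaxation (an editor's illustration, not real FEA or compartment software; `relax` and the grid are invented): each element exchanges flux only with its two immediate neighbours, yet the system settles into a global equilibrium profile.

```python
def relax(u, alpha=0.25, steps=200):
    """Explicit local update u_i += alpha * (u_{i-1} - 2*u_i + u_{i+1})
    for 1D heat conduction; the endpoints are held fixed (Dirichlet
    boundary conditions). Every element sees only its own boundaries."""
    u = list(u)
    for _ in range(steps):
        u = [u[0]] + [u[i] + alpha * (u[i - 1] - 2 * u[i] + u[i + 1])
                      for i in range(1, len(u) - 1)] + [u[-1]]
    return u

# A rod fixed at 0 and 100 degrees relaxes toward a straight-line
# temperature profile -- a global pattern produced entirely by local
# exchanges at element boundaries.
profile = relax([0.0] + [0.0] * 9 + [100.0], steps=5000)
```

With 11 nodes the steady state is linear, so node i sits near 10*i degrees; the "dynamic equilibrium at element boundaries" idea is visible in the update rule itself.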
 
  • #57
Q_Goest said:
Cellular automata were given as a prime example of "weak emergence" by Mark Bedau. His paper is fairly popular, being cited over 200 times. Bottom line, cellular automata are weakly emergent. They are separable and have no "downward" causal effects on local elements.

Agreed, the very definition of Turing Computation is that there is no top-down causation involved, only local or efficient cause.

The programmable computer is the perfect "machine". Its operations are designed to be deterministic.

And there are now so many computers and computer scientists in society that people are coming to believe reality is also "computational". They can no longer imagine other forms of more complex causality it seems.
 
  • #58
Hi apeiron,
apeiron said:
But I just gave you references that show it IS the conventional view of neuron interactions.
Hopefully my last post addresses this. Neuron interactions are philosophically treated using compartmental methods as described in my last post. If that's not what you feel is pertinent to the issue of separability, please point out specifically what you wish to address.
 
  • #59
apeiron said:
And there are now so many computers and computer scientists in society that people are coming to believe reality is also "computational". They can no longer imagine other forms of more complex causality it seems.
But it's not just "computer scientists". People are using that basic philosophy (computational/FEA) for everything now (at a classical scale). We can't live without it because it's so powerfully predictive.
 
  • #60
apeiron said:
Pardon me? Did you just suggest that a QM basis to neural function was mainstream?
Sorry! Thanks for pointing that out. I fixed it (edited it). What I meant, of course, is that QM is NOT the basis for neural function.
 
  • #61
Q_Goest said:
But the brain IS modeled using typical FEA type computational programs.
And that is my point. You are saying "the brain is modeled by X." Then you say "X cannot do Y." Then you say "thus Y cannot be important in understanding the brain." That is the Catch-22: if you build a model that cannot do something, you can't then pin that inability on the brain. The model may do many things the brain does, and so may be a good model of a brain, but one cannot reason from the model to the thing; that's essentially the fallacy of reasoning by analogy.

FEA and multiphysics software is a widespread example of the use of computations that functionally duplicate (classical) physical systems.
Ah, now there's an interesting turn of phrase, "functionally duplicate." What does that mean? It sounds like it should mean "acts in the same way that I intended the model to succeed at acting", but you sound like you are using it to mean "does everything the system does." That is what you cannot show, you cannot make a model for a specific purpose, demonstrate the model succeeds at your purpose, and use that success to rule out every other purpose a different model might be intended to address-- which is pretty much just what you seem to be trying to do, if I understand you correctly.

Even highly dynamic ones such as Benard cells, wing flutter, aircraft crashing into buildings and even the brain are all modeled successfully using this approach.
Certainly, "modeled successfully." Now what does that mean? It means you accomplished your goals by the model, which is all very good, but it does not mean you can then turn around and use the model to obtain orthogonal information about what you are modeling. Just what constitutes "orthogonal information" is a very difficult issue, and I don't even know of a formal means to analyze how we could tell that, other than trial and error.
However, the basic philosophical concept that leads us to FEA (i.e., that all elements are in dynamic equilibrium at their boundaries) is the same basic philosophy that science and engineering use for brains and other classical physical systems.
But so what?
 
  • #62
apeiron said:
The programmable computer is the perfect "machine". Its operations are designed to be deterministic.

And there are now so many computers and computer scientists in society that people are coming to believe reality is also "computational".
This is the perspective I am also in agreement with. We are seeing a failure to distinguish the goals of a model from the thing that is being modeled. I see this error in lots of places, it was made many times in the history of science. When Newton came up with a sensational model of motion, very unified and highly predictive, people said "so that's how reality works." They then reached all kinds of conclusions about what reality could and could not do, none of which were worth the paper they were written on when other models deposed Newton's. The point is, that is just reversed logic-- we don't use models to tell us what systems can do, we use systems to tell us what we are trying to get models to do, and we choose the latter. The choice of what we want to answer determines the model, the models shouldn't tell us what we should want to answer.
 
  • #63
Q_Goest said:
But it's not just "computer scientists". People are using that basic philosophy (computational/FEA) for everything now (at a classical scale). We can't live without it because it's so powerfully predictive.

Or powerfully seductive :rolleyes:.

I am not denying that the computational view is powerful. It is great for technology - for achieving control over the world. But it makes a myopic basis for philosophy as it leaves out the other sources of causality. And you won't find many biologists or neurologists who believe that it is the truth of their systems.

In your sphere of work, FEA is a completely adequate intellectual tool you say. But you are just wrong when you say it is the way neuroscientists think about brains.

Can't you see how FEA is set up to remove questions of downward constraint from the analysis?

In the real world, systems have pressures and temperatures due to some global history. There is a context that causes the equilibrium results. But then FEA comes into this active situation and throws an infinitely thin 2D measuring surface around a volume. A surface designed not to affect the dynamics in any fashion - any top-down fashion! The surface is designed only to record the local degrees of freedom, not change them. You have even specified that the measurements are classical, because you know there is a quantum limit to how thin and non-interactive, yet still measuring, such a surface can be.

So what you are claiming as an ontological result (reality is computational) is just simply a product of your epistemology (you are measuring only local causes). And the map is not the terrain.

You are claiming that FEA is useful (nay, powerfully predictive) in modelling brains. So can you supply references where neuroscientists have employed FEA to deliver brilliantly insightful results? Where are the papers that now back up your claim?
 
  • #64
Q_Goest said:
cellular automata are weakly emergent. They are separable and have no "downward" causal effects on local elements.
Don't you think these glider guns (http://en.wikipedia.org/wiki/Gun_(cellular_automaton)) challenge this view?
 
  • #65
Ken G said:
Personally, I am heavily swayed by the formalist approach. I feel that there is an important difference between logical truth, which is rigorous and syntactic, and semantic meaning, which is experiential and nonrigorous. Mathematics is primarily about the former, because of its reliance on proof, and physics is primarily about the latter, because of its reliance on experiment. Why the two find so much common ground must be viewed as the deepest mystery in the philosophy of either, and I have no answer for it either, other than it seems there is some rule that says what happens will be accessible to analysis, but the analysis will not be what happens.
Don't you think this is a deep mystery only if one takes the formalist approach?
 
  • #66
Lievo said:
Don't you think this is a deep mystery only if one takes the formalist approach?
Ah, interesting question. No doubt this is indeed the central basis of the Platonist idea that mathematical truths lie at the heart of reality, such that when we discover those truths, we are discovering reality. That was an easy stance to take in ancient times, but I would say that many of the discoveries of physics and math since then are leading us to see that stance as fundamentally naive. In physics, we had things like the discovery of general relativity, which calls into question just how "obvious" it is that the inertial path is a straight line in spacetime. Granted, that earlier view is replaced by an equally mathematical aesthetic, albeit a more abstract one, so one might say "see, it's still fundamentally mathematical, it's just different mathematics." To which I would respond: how many times are we allowed to say "OK, we were wrong before, but this time we have it right"?

I would say that if we get a mathematical model, interpret it as the truth of the reality, and find time and again that it isn't, at some point we should just stop interpreting it as the truth of the reality. At which point, we begin to become amazed we did so well in the first place. In other words, I'm not sure which is more surprising, that our models are so accurate, or that a model can be so accurate and still not be true.

Then there's also the Godel proof, which says that in any consistent mathematical system powerful enough to express arithmetic, there have to be things that are true yet cannot be proven from the axioms of that system (which means they cannot be proven at all, since any effective procedure for generating consistent axioms is itself a system of proof). This means there always has to be a permanent difference between what is true by meaning and what is provable by axiom. It may be a very esoteric and never-encountered difference, but it has to be there-- and I think that in fact the difference is not esoteric and is encountered constantly, which is why physics keeps changing.
 
  • #67
Ken G said:
Ah, now there's an interesting turn of phrase, "functionally duplicate." What does that mean? It sounds like it should mean "acts in the same way that I intended the model to succeed at acting", but you sound like you are using it to mean "does everything the system does."

In this regard, those who see deep philosophical truths in deterministic chaos need to remember the "dirty little secret" of truncation error in computational simulations of chaotic trajectories.

The shadowing lemma says the trajectories may stay "sufficiently close" for many practical purposes. So they can be functionally duplicate. But this is not the same as claiming an ontological-level duplication. The model is never actually replicating the system in the strict sense of philosophical determinism. Indeed, we know that it definitely isn't. Shadowing says only that the probability is high that the simulation will remain in the vicinity of what it pretends to duplicate!

Here is an intro text on shadowing.

http://www.cmp.caltech.edu/~mcc/Chaos_Course/Lesson26/Shadowing.pdf
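The effect is easy to demonstrate in a few lines (an editor's toy sketch, not taken from the linked notes; `logistic` and the rounding scheme are invented for illustration). Iterate the chaotic logistic map once at full double precision and once with a tiny artificial truncation each step; the exponential sensitivity amplifies the per-step error until the trajectories part company.

```python
def logistic(x0, n, digits=None):
    """Iterate the chaotic logistic map x -> 4x(1-x) on [0, 1].
    If `digits` is given, round each iterate to that many decimal
    places, injecting a tiny truncation error at every step."""
    x = x0
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        if digits is not None:
            x = round(x, digits)  # ~1e-12 perturbation per step
    return x

x0 = 0.1
exact = logistic(x0, 60)
truncated = logistic(x0, 60, digits=12)
# With a Lyapunov exponent of ln 2, a ~1e-12 per-step error can grow to
# order unity within a few dozen iterations, so the two trajectories
# need not stay close even though they started identically.
gap = abs(exact - truncated)
```

This is exactly the "dirty little secret": the simulated trajectory shadows *some* true trajectory of the map, but not necessarily the one you started on.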
 
  • #68
The truncation error is not significant. If I run the same deterministic system twice, I won't get different results because of truncation error. The two runs will have exactly the same fate.

Anyway, as Ken's been saying, the computer models are tools for analysis, not deep philosophical statements.
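The repeatability claim can be checked directly (a minimal sketch assuming ordinary IEEE-754 doubles; the function name is invented):

```python
def run(x0, n):
    """Iterate the chaotic logistic map at r = 3.9 in double precision."""
    x = x0
    for _ in range(n):
        x = 3.9 * x * (1.0 - x)
    return x

first = run(0.2, 1000)
second = run(0.2, 1000)
# IEEE-754 arithmetic is deterministic: identical inputs and identical
# rounding give bit-identical outputs, so the truncation error is
# perfectly consistent rather than random.
assert first == second
```

So repeating the run reproduces the same rounding errors exactly; it says nothing about whether those errors matter relative to the continuous system being modelled.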
 
  • #69
apeiron said:
Shadowing says only that the probability is high that the simulation will remain in the vicinity of what it pretends to duplicate!

Here is an intro text on shadowing.

http://www.cmp.caltech.edu/~mcc/Chaos_Course/Lesson26/Shadowing.pdf
That is interesting. I interpret that as saying a computed trajectory can be viewed as an approximation of some true trajectory that the deterministic mathematical system does support, so it's not a complete fiction of the computation, but it is not necessarily the trajectory that every real system in the neighborhood would follow-- even if the mathematical model were a true rendition of reality. So this supports your point that we know deterministic models of chaotic systems cannot be the whole truth, even if we are inclined to believe they are close to the truth. The situation is even worse if we are inclined to be skeptical that there is any such thing as a "true trajectory" of a physically real system, let alone that a model can reproduce it consistently for long times.
 
  • #70
Pythagorean said:
The truncation error is not significant. If I run the same deterministic system twice, I won't get different results because of truncation error. The two runs will have exactly the same fate.
That doesn't mean the truncation error is never significant, it means the truncation error is consistent, which is something different-- it might be consistently significant!
Anyway, as Ken's been saying, the computer models are tools for analysis, not deep philosophical statements.
That is the key issue, yes.
 
  • #71
Ken G said:
Ah, interesting question. No doubt this is indeed the central basis of the Platonist idea that mathematical truths lie at the heart of reality, such that when we discover those truths, we are discovering reality.

Plato was really more concerned with the form of things than mathematical truth as such. But maths is the science of patterns, so there is a large overlap.

The modern way of viewing this would be that the realm of form (as the counterpart to the realm of substance) is about self-organisation. Or global constraints (yes, sigh). It is about all the self-consistent patterns that can exist. And so it is all about symmetry principles. Changes which are not a change.

Maths looks like reality because maths creates a library of possible pattern descriptions, and reality is a self-organising pattern.

Wolfram's cellular automata project was so funny because he took a very simple pattern generator and then just exhaustively generated every possible pattern to see which ones resembled reality.

But in general, this is what maths does. It defines some broad axiomatic truths (it creates some global constraints) and then generates all the possible patterns made possible by those constraints.

The problem then is that the axioms can't be known to be true (even if the consequences that flow from the axioms are deterministic, or at least taken to be proven step by step).

So the forms are real. But the maths is the modelling of forms.

However, maths can also hope to model self-organisation itself. Which is where chaos theory for example comes in as a hugely successful way of modelling "random nature".

Key constraints (on linearity for instance) are relaxed. The system is then allowed to organise its own constraints as part of what it does.

This is why maths can make historical progress. The early work was overloaded with presumed constraints (such as a presumption space was flat, it had just three dimensions, time was a separate clock, etc). Too much was taken as globally axiomatic when it was only specific to a local system.

But maths has just kept relaxing the constraints so as to arrive at the fewest and simplest axioms. And then more recently started to do the other part of the job - invent models of constraint development. Models of global self-organisation.

So first strip the constraints out, then find the way they can build themselves back in. Once maths reaches this level of modelling, it really will be powerfully close to the truth of a self-organising reality.
 
  • #72
Lievo said:
Don't you think these glider guns (http://en.wikipedia.org/wiki/Gun_(cellular_automaton)) challenge this view?
I responded to this one above.
 
  • #73
Pythagorean said:
The truncation error is not significant. If I run the same deterministic system twice, I won't get different results because of truncation error. The two runs will have exactly the same fate.

Well you agree that the map is not the terrain.

Of course the map, the computer simulation MUST keep repeating its truncation errors at each iteration. The computer has been designed to be able to do just this...without error :biggrin:.

But the point is the reality is doing something else. There are no truncation errors in its "computation" of a chaotic trajectory.

The simulation actually starts on a different trajectory with every iteration because a truncation error changes what would have been the exact next state every time. The shadowing lemma just argues this is not a significant problem in the pragmatic sense. However in philosophical arguments demanding absolutes, such as absolute determinism, truncation error is indeed "the dirty secret".
 
  • #74
apeiron said:
So first strip the constraints out, then find the way they can build themselves back in. Once maths reaches this level of modelling, it really will be powerfully close to the truth of a self-organising reality.
I think that approach is probably a description of the next level of sophistication of models, rather than of what makes the models the reality. We must always avoid the mistake of thinking that we are one step away from making our models real! People have pretty much always thought that. But I agree with the sentiment you are expressing that seeking constraints was too brute-force an approach, and symmetries are in some sense the opposite of constraints-- they are patterns that emerge from necessity more so than fiat, because all math is a search for order, and the first step for finding order is not the search for how to strong-arm the behavior into what you want, but rather how to recognize all the things you imagined were different but really weren't. That's how you throw out complexity without throwing out the "business end" of what you want to be left with-- and only after you've done that should you start looking for the rules of behavior.

It's an interesting insight that sometimes the rules require re-introducing the complexity that had been removed, like the way symmetry-breaking gives rise to interactions, played out against the backdrop of approximately unbroken symmetries. So I'm sympathetic to your perspective that the next place models need to go is into the realm of self-organization and top-down contextual support for constraints, rather than trying to intuit the bottom-up local constraints that will magically make it all work. But I still think that when you have done that, you will have just another set of models-- useful for doing what you wanted them to do, and important expressly because they represent a new kind of want.
 
  • #75
Q_Goest said:
I responded to this one above.
What was your response?
 
  • #76
Lievo said:
What was your response?

...
 
  • #77
Ken G said:
one might say "see, it's still fundamentally mathematical, it's just different mathematics." To which I would respond: how many times are we allowed to say "OK, we were wrong before, but this time we have it right"?
Ah, I'm surprised you miss here an important distinction between mathematical systems and mathematical models. Newton to relativity is not a switch between systems, it's a switch between models, and different models are not different mathematics, they're different... well, models. The formalist-Platonist debate is about whether there exists a single system that is at the root of reality, or whether systems of axioms are human choices that may have little to do with physical reality. Up to now I'm not aware of any scientific theory that is not a model of Peano arithmetic (which is a system), so it's not different mathematics, only different models of the same mathematics.
 
  • #78
Ken G said:
I think that approach is probably a description of the next-level of sophistication of models, rather than what makes the models the reality. We must always avoid the mistake of thinking that we are one step away from making our models real!

Oh I agree completely. The map will still not be the terrain. But still, I wanted to highlight what real progress would look like.

One of the complaints about modern maths is that it has departed from reality modelling and now spends too much time inventing fictional realities. And it is true. A lot of what gets studied and published is just creating an unnecessary clutter of journal papers.

But this masks a general progress in relaxing the constraints. For instance, projects like category theory. And it also masks the new thing of modelling self-organisation.

and symmetries are in some sense the opposite of constraints

I would say that symmetries are what you get having stripped out all constraints. And so symmetries become a fundamental ground on which you can start modelling the self-organising development of constraints.

This is precisely the metaphysics of CS Peirce, what he was talking about with his process of abduction, his machinery of semiosis, his logic of vagueness.

It's an interesting insight that sometimes the rules require re-introducing the complexity that had been removed, like the way symmetry-breaking gives rise to interactions, played out against the backdrop of approximately unbroken symmetries. So I'm sympathetic to your perspective that the next place models need to go is into the realm of self-organization and top-down contextual support for constraints, rather than trying to intuit the bottom-up local constraints that will magically make it all work. But I still think that when you have done that, you will have just another set of models-- useful for doing what you wanted them to do, and important expressly because they represent a new kind of want.

Yes, the map is not the terrain. No argument.

But as you say, the big switch that so many are struggling to take is the shift from the local view to the global one.

People are still trying to bury the global organisation in the local interactions. They still want to do the old-school atomism that worked so well at the birth of modern science - the atomism that scooped up all the low hanging fruit and then pretended the orchard was now bare.

Atomism has become nothing less than a scientific faith, a religion. There is only one answer when it comes to modelling, and all other heretical views must burn at the stake. :-p

That is what makes PF so enjoyable. It is a natural bastion of reductionist fundamentalism. On other academic forums where I am just surrounded by biologists and neuroscientists, the debates are only about hair-splitting differences in shades of opinion. Like should we call this self-organising view of thermodynamics that has emerged, the maximum energy dispersion principle (MEDP) or the maximum entropy production principle (MEPP)?
 
  • #79
Lievo said:
Ah, I'm surprised you miss here an important distinction between mathematical systems and mathematical models.
But all I'm saying is that one cannot make the argument that reality is fundamentally mathematical on the basis of the success of current models, without the argument falling under the weight of all the past models that were replaced by current models. One is essentially claiming that there is something special about the present, which seems like an illusory claim.
The formalist-Platonist debate is about whether there exists a single system that is at the root of reality, or whether systems of axioms are human choices that may have little to do with physical reality. Up to now I'm not aware of any scientific theory that is not a model of Peano arithmetic (which is a system), so it's not different mathematics, only different models of the same mathematics.
It is true that the core system of mathematics, a la Whitehead and Russell or some such thing, has not really changed, but that system by itself (and now if we wish to use the term "system", we will need to distinguish mathematical systems from physical systems) is never used to make the argument that reality is fundamentally mathematical. That mathematical system is based on axioms, and the axioms have never been found to be inconsistent, that is true-- but a system of finite axioms is a far cry from reality. To make contact with reality, the system must be injected into a model that invokes postulates, by which I mean things we have no a priori idea are true or not (whereas axioms must seem true), and that's essentially what physics is. So when someone claims reality is mathematical, it is not because of the success of Peano axioms or some such, it is because of the success of those physical postulates. And those are what keep changing-- the very success the argument is shouldered by keeps letting the argument down.
 
  • #80
octelcogopod said:
But isn't the collective whole of all the macroscopic systems actually one big deterministic one at the lowest level?
Do you mean is it really deterministic, i.e., determined, or do you mean can we find value in modeling it with deterministic mathematics? The latter is certainly true, but the former claim is the one I am calling into doubt.
Also just for hypothesizing; wouldn't the universe during the big bang start out with almost no separated states and then evolve into many different states and thus create increased separation of systems, but ultimately be one big system?
It sounds like you are asking if there is such a thing as "the state of the universe as a whole." I think that's an important question, and people do tend to talk about such a state, but no one has ever demonstrated the concept really makes any sense. For one thing, it leads us to the many-worlds interpretation of quantum mechanics, which to some seems quite absurd.
 
  • #81
Hi Lievo,
Lievo said:
What was your response?
See post 54, page 4.
 
  • #82
Pythagorean said:
Anyway, as Ken's been saying, the computer models are tools for analysis, not deep philosophical statements.
apeiron said:
But it makes a myopic basis for philosophy as it leaves out the other sources of causality.
This is a good discussion, but I disagree. There IS a deep philosophical reason why FEA is done the way it is. It isn't just so we can model something, and it DOES take into account every known source of causality. Benard cells, for example, are highly dynamic, highly nonlinear phenomena (that have been used as a potential example of downward causation), yet they can be modeled with FEA (http://wn.com/Rayleigh-Bénard) only because it can take into account ALL the causality and ALL the philosophy of why fluids behave as they do. There's no need to suggest some kind of downward causation when local interactions are sufficient to bring about all the phenomena we see in Benard cells.

That same philosophy is used for modeling neurons in the brain, not just because it works, but because the philosophy of why the brain and neurons are doing what they do matches what science believes is actually happening. In short, the basic philosophy FEA analysis adheres to is that of first breaking down a complex structure, and then calculating the conditions on the individual elements such that they are in dynamic equilibrium both with the overall boundary conditions acting on the system and also with each other.
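As a toy illustration of that philosophy (my own sketch, not code from any of the papers or packages discussed here): one-dimensional steady-state heat conduction can be solved by pure local relaxation, with each element repeatedly equilibrating with its neighbours under fixed boundary conditions.

```python
# Toy illustration of the FEA philosophy described above: 1D steady-state
# heat conduction solved by Jacobi relaxation. Each interior "element" is
# repeatedly set to the average of its neighbours (local dynamic
# equilibrium), with fixed boundary conditions at the two ends. The global
# temperature profile emerges from purely local updates.

def relax_1d(n_elements=11, t_left=0.0, t_right=100.0, iters=5000):
    t = [0.0] * n_elements
    t[0], t[-1] = t_left, t_right   # boundary conditions on the overall system
    for _ in range(iters):
        # every interior element equilibrates with its immediate neighbours
        t = [t[0]] + [(t[i - 1] + t[i + 1]) / 2.0
                      for i in range(1, n_elements - 1)] + [t[-1]]
    return t

profile = relax_1d()
# The converged solution is the linear profile 0, 10, 20, ..., 100
print([round(x, 3) for x in profile])
```

Nothing in the update rule mentions the global temperature profile, yet the global profile is what converges out of the local updates - which is the weak-emergence picture being claimed for FEA.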

Now I’m not a neuroscience expert by any stretch. But it’s clear that neuroscience is also dedicated to using the same computational approach engineers use. They use compartment models just as FEA uses elements. Here's a picture of what you might see on the computer screen when you create these models:
http://www-users.mat.umk.pl/~susur/1b.jpg
Again, this isn’t done simply because they want to model something. It’s done because that’s how they believe the brain works. These models include all the causality and all the philosophy of what’s going on, and their models are approaching the point where they are matching experiment. Common software programs I’ve heard of include “NEURON” (http://www.neuron.yale.edu/neuron/); these compartment models are essentially finite element analysis (FEA) on a grand scale.

Here’s a few examples:
Herz, “Modeling Single-Neuron Dynamics and Computations: A Balance of Detail and Abstraction” (Science 314, 80 (2006))
Herz discusses 5 different “compartment models” and how and why one might use one compared to another. Detailed compartment models are listed in Roman numerals I to V. Model III is the common Hodgkin-Huxley model. Herz concludes with:
These developments show that the divide between experiment and theory is disappearing. There is also a change in attitude reflected by various international initiatives: More and more experimentalists are willing to share their raw data with modelers. Many modelers, in turn, make their computer codes available. Both movements will play a key role in solving the many open questions of neural dynamics and information processing – from single cells to the entire brain.
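For concreteness, here is a minimal single-compartment Hodgkin-Huxley integration (a sketch using the standard textbook parameters, not code from Herz's paper; packages like NEURON solve the same equations per compartment and couple the compartments with cable currents):

```python
# Minimal single-compartment Hodgkin-Huxley model (forward Euler), in the
# spirit of Herz's "model III". Standard textbook parameters in mV, ms,
# uA/cm^2, mS/cm^2. A sustained injected current drives repetitive spiking.
import math

def hh_spike_count(i_inj=10.0, t_max=50.0, dt=0.01):
    c_m, g_na, g_k, g_l = 1.0, 120.0, 36.0, 0.3
    e_na, e_k, e_l = 50.0, -77.0, -54.387

    # voltage-dependent rate functions for the m, h, n gates
    def a_m(v): return 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
    def b_m(v): return 4.0 * math.exp(-(v + 65.0) / 18.0)
    def a_h(v): return 0.07 * math.exp(-(v + 65.0) / 20.0)
    def b_h(v): return 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
    def a_n(v): return 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
    def b_n(v): return 0.125 * math.exp(-(v + 65.0) / 80.0)

    v = -65.0                                  # resting potential
    m = a_m(v) / (a_m(v) + b_m(v))             # gates start at steady state
    h = a_h(v) / (a_h(v) + b_h(v))
    n = a_n(v) / (a_n(v) + b_n(v))

    spikes, above = 0, False
    for _ in range(int(t_max / dt)):
        i_ion = (g_na * m**3 * h * (v - e_na)
                 + g_k * n**4 * (v - e_k)
                 + g_l * (v - e_l))
        v += dt * (i_inj - i_ion) / c_m
        m += dt * (a_m(v) * (1.0 - m) - b_m(v) * m)
        h += dt * (a_h(v) * (1.0 - h) - b_h(v) * h)
        n += dt * (a_n(v) * (1.0 - n) - b_n(v) * n)
        if v > 0.0 and not above:              # count upward 0 mV crossings
            spikes += 1
        above = v > 0.0
    return spikes

print(hh_spike_count())   # sustained current -> repetitive firing
```

Everything here is local, causal physics acting on one compartment; this is the level of description the compartment-model papers work at.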

Another typical paper by Destexhe, “Dendritic Low-Threshold Calcium Currents in Thalamic Relay Cells” J. of Neuroscience, May 15, ’98, describes work being done to compare dissociated neurons (in petri dishes) to neurons in vivo. I thought this one was interesting because it explicitly states the presumption that neurons can be made to function in the petri dish exactly as they do in vivo. He also uses the computer code NEURON as mentioned above.

There are literally tons of papers out there that show how neurons are made to act exactly as local causal physics would have them (ie: weak emergence). Yes, neurons are highly nonlinear, and yes, to some degree they exhibit stochastic behavior - to the experimentalist; which raises the question of whether they truly are probabilistic, or whether there are ‘hidden variables’, so to speak, that we simply haven’t nailed down. Even if we find that neurons exhibit truly probabilistic behaviors such as those radioactive decay exhibits, is that single feature of a neuron truly going to lead us to finding “free will”?
 
Last edited by a moderator:
  • #83
Q_Goest said:
There IS a deep philosophical reason why FEA is done the way it is. It isn't just so we can model something, and it DOES take into account every known source of causality.
I agree this is a good discussion, and I would say that what your posts are primarily doing is making very explicit the good things that FEA models do. There is no dispute these models are powerful for what they are powerful at, the overarching question is, can they be used to model free will, or are they in some sense leaving that behind and then reaching conclusions about what it isn't.
That same philosophy is used for modeling neurons in the brain, not just because it works, but because the philosophy of why the brain and neurons are doing what they do matches what science believes is actually happening.
This is the place where I would say your argument keeps hitting a wall. Science has no business believing anything about what is actually happening, that is an error in doing science that has bit us over and over. It's simply not science's job to create a belief system, it's science's job to create models that reach certain goals. So science starts by identifying its goals, and then seeking to achieve them. There's simply no step in the process that says "believe my model is the reality." The whole purpose of a model is to replace reality with something that fits in our head, and makes useful predictions. But if the nature of what the model is trying to treat changes, if the questions we are trying to address change, then we don't take a model that is built to do something different and reach conclusions in the new regime, any more than we try to drive our car into the pond and use it like a submarine. It just wasn't built to do that, so we need to see evidence that FEA methods treat free will, not that they are successful at other things so must "have it right". We don't take our models and tell reality how to act, we take how reality acts and see if we can model it. How does one model free will, and where is the evidence that an FEA approach is the way to do it?

In short, the basic philosophy FEA analysis adheres to is that of first breaking down a complex structure, and then calculating the conditions on the individual elements such that they are in dynamic equilibrium both with the overall boundary conditions acting on the system and also with each other.
The whole is the sum of its parts. Yes, that does seem to be the FEA approach-- so what we get from it is, whatever one gets from that approach, and what we don't get is whatever one doesn't get from that approach. We cannot use validity in some areas as evidence of uniform validity. It's not that FEA is doing something demonstrably wrong, it is that it might just not be doing something at all.
Even if we find that neurons exhibit truly probabalistic behaviors such as for example, radioactive decay exhibits, is that single feature of a neuron truly going to lead us to finding “free will”?
I agree that random behavior is no better than deterministic behavior in getting free will, we are saying we don't think that either mode of operation is going to get to the heart of it. It's just something different from either deterministic or random behavior of components, it's something where the whole cannot be expected to emerge from algorithmic behavior of individual parts. The algorithms of a successful model will need to be more holistic-- if the entire issue is algorithmic in the first place (which I find far from obvious, given the problems with using our brains to try and understand our brains).
 
  • #84
Ken G, Apeiron, Q_Goest, Lievo:... I'm out of my depth philosophically, but I just want to say that it's a real pleasure reading this interchange.


@Q_Goest: That's a very useful model... how did you get into this without a primary interest in neuroscience?

This is all more (pleasantly) than I expected from Philosophy... even here.
 
  • #85
Q_Goest said:
There are literally tons of papers out there that show how neurons are made to act exactly as local causal physics would have them (ie: weak emergence). Yes, neurons are highly nonlinear, and yes, to some degree they exhibit stochastic behavior - to the experimentalist; which raises the question of whether they truly are probabilistic, or whether there are ‘hidden variables’, so to speak, that we simply haven’t nailed down. Even if we find that neurons exhibit truly probabilistic behaviors such as those radioactive decay exhibits, is that single feature of a neuron truly going to lead us to finding “free will”?

I ask about brains and you talk about neurons! So the scale factor is off by about 11 orders of magnitude. :rolleyes:

But anyway, your presumption here is that neurons make brains, whereas I am arguing that brains also make neurons. The system shapes the identity of its components.

One view says causality is solely bottom-up - constructed from atoms. The other says two kinds of causality act synergistically. There are also the top-down constraints that shape the atoms. So now both the atoms and the global scale emerge jointly in a process of differentiation~integration.

Absolutely everything is "just emergent".

The FEA approach you describe only works because the global constraints are taken as already in existence and so axiomatic. What does not change does not need to be mentioned when modelling.

So take a benard cell. An entropic gradient is presumed. The source and the sink are just there. The model does not seek to explain how this state of affairs developed, just what then happens as a consequence.

Order then arises at a critical temperature - the famous hexagonal cells. Local thermal jostlings magically become entrained in a global scale coherent motion.

Now these global scale cells do in fact exert a downward causal effect. As just said, they entrain the destinies of individual molecules of oil. This is what a dissipative structure is all about. Global constraints (the order of the flow) acting to reduce the local degrees of freedom (the random thermal jostle of the molecules become suddenly far less random, far more determined).
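That "global order entraining local degrees of freedom" idea can be given a toy quantitative form (my illustration, not a simulation of Benard convection): in the mean-field Kuramoto model, each oscillator is coupled only to the global order parameter, so the collective state literally appears in each local equation of motion.

```python
# Toy analogue of global order entraining local elements: the mean-field
# Kuramoto model. Each oscillator's phase is pulled toward the *global*
# mean field (r, psi), so the collective state feeds back into every
# local update. Below a critical coupling the population stays
# incoherent; above it, the global order entrains the individuals.
import cmath
import math
import random

def kuramoto_order(coupling, n=200, t_max=50.0, dt=0.05, seed=1):
    rng = random.Random(seed)
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]  # phases
    omega = [rng.gauss(0.0, 0.2) for _ in range(n)]  # natural frequencies
    for _ in range(int(t_max / dt)):
        z = sum(cmath.exp(1j * t) for t in theta) / n   # global mean field
        r, psi = abs(z), cmath.phase(z)
        # each oscillator feels only the global order parameter
        theta = [t + dt * (w + coupling * r * math.sin(psi - t))
                 for t, w in zip(theta, omega)]
    return abs(sum(cmath.exp(1j * t) for t in theta) / n)

print(kuramoto_order(0.05))  # weak coupling: stays incoherent (r small)
print(kuramoto_order(2.0))   # strong coupling: entrained (r near 1)
```

Whether one wants to call the mean field a "downward cause" or just a convenient summary of the local interactions is, of course, exactly the dispute in this thread.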

So benard cells are frequently cited as an example of self-organisation due to the "mysterious" development of global order.

There are other features we could remark on, like the fact that the whorls are hexagonal (roughly) rather than circular. The fact that the activity is confined (globally constrained) reduces even the "local degrees of freedom" of these benard cells. Circular vortexes are the unconstrained variety. Hexagonal ones are ones with extra global constraints enforced by a packing density.

Note too that the macro-order that the benard cell is so often used to illustrate is a highly delicate state. Turn the heat up a little and you soon have the usual transition to chaos proper - whorls of turbulence over all scales, and no more pretty hexagonal cells.

In a natural state, a dissipative structure would arrange itself to maximise entropy throughput. The benard cell is a system that some experimenter with a finger on the bunsen burner keeps delicately poised at some chosen stage on the way to chaos.

So the benard cell is both a beautiful demonstration of self-organising order, and a beautifully contrived one. All sorts of global constraints are needed to create the observed cellular pattern, and some of them (like a precisely controlled temperature) are wildly unnatural. In nature, a system would develop its global constraints rapidly and irreversibly until entropy throughput is maximised (as universality describes). So the benard cell demonstration depends on frustrating that natural self-organisation of global constraints.

So again, the challenge I made was find me papers on brain organisation which do not rely on top-down causality (in interaction with bottom-up causality).

Studying neurons with the kind of FEA philosophy you are talking about is still useful because it allows us to understand something about neurons. Reductionism always has some payback. But philosophically, you won't be able to construct conscious brains from robotic neurons. Thinking about the brain in such an atomistic fashion will ensure you never see the larger picture of the causality.
 
  • #86
I was looking to see if I had something to add and found I had not responded to the following. Sorry for being late.

apeiron said:
But BPP assumes determinism (the global constraints are taken to be eternal, definite rather than indefinite or themselves dynamic). So no surprise that the results are pseudo-random and Ockham's razor would see you wanting to lop off ontic randomness.
Yes (BPP is computable, so deterministic in my view), but I'm not so sure there is no surprise in P=BPP. Maybe we should wait for a formal proof that the equality truly holds.
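The P-vs-BPP issue has a familiar concrete face (my illustration, not anything Lievo or apeiron cited): Miller-Rabin primality testing is the classic randomized algorithm, yet for bounded inputs a small fixed set of witness bases is known to suffice, i.e. the coin flips can be removed.

```python
# Toy illustration of derandomization: Miller-Rabin is normally run with
# random witness bases, but the fixed bases (2, 3, 5, 7) are known to be
# a deterministic test for all n < 3,215,031,751. We check it against
# trial division on small inputs.

def miller_rabin(n, bases=(2, 3, 5, 7)):
    if n < 2:
        return False
    for p in (2, 3, 5, 7):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:          # write n - 1 as d * 2^s with d odd
        d //= 2
        s += 1
    for a in bases:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False       # base a witnesses that n is composite
    return True

def is_prime_slow(n):          # ground truth by trial division
    return n > 1 and all(n % k for k in range(2, int(n**0.5) + 1))

# Agrees with trial division on every n below 10^4 -- no coin flips needed.
print(all(miller_rabin(n) == is_prime_slow(n) for n in range(10_000)))  # True
```

This doesn't prove P=BPP, of course; it just shows the flavour of the conjecture, that randomness in such algorithms looks eliminable.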

PS:
apeiron said:
I ask about brains and you talk about neurons! So the scale factor is off by about 11 orders of magnitude. :rolleyes:
[STRIKE]12, actually.[/STRIKE]
EDIT forget that :blushing:
 
Last edited:
  • #87
Ken G said:
I agree that random behavior is no better than deterministic behavior in getting free will, we are saying we don't think that either mode of operation is going to get to the heart of it. It's just something different from either deterministic or random behavior of components, it's something where the whole cannot be expected to emerge from algorithmic behavior of individual parts. The algorithms of a successful model will need to be more holistic-- if the entire issue is algorithmic in the first place (which I find far from obvious, given the problems with using our brains to try and understand our brains).

To reconnect to the OP, I would restate that the top-down systems argument is that the determined and the random are what get shaped up, down at the local level, due to the self-organisation of global constraints.

So down at the ground level, there are just degrees of freedom. As many as you please. Then from on high comes the focusing action. The degrees of freedom are constrained in varying degree.

When highly constrained, there is almost no choice but to act in some specified direction. The outcome looks deterministic (and can be modeled as deterministic in the language of atomism/mechanicalism).

When weakly constrained, there is lots of choice and the direction of action becomes "random". Things go off in all the directions that have been permitted.

An analogy might be a piston engine. The gas explosion has many degrees of freedom when unconstrained. But confined to a chamber with a piston, the result becomes deterministic. The action has a single available direction.

Freewill, voluntary action, selective attention, consciousness, etc, are all words that recognise that our reality comes with a global or systems level of downwards acting causality. We can organise our personal degrees of freedom in a way that meets evolutionary needs. My hand could wander off in any direction. But I can (via developmental learning) constrain it to go off in useful directions.

There is no battle going on in my brain between the dark forces of determinism and randomness. I am not a slave to my microcauses. Instead, just as I experience it, I can exert a global constraint on my microscale that delivers either a highly constrained or weakly constrained response.

I can focus to make sure I pick up that cup of coffee. Or I can relax and defocus to allow creative associations to flow. There is a me in charge (even if I got there due to the top-down shaping force of socio-cultural evolution :devil:).

It is only when I swing a golf club that I discover, well, there is also some bottom-up sources of uncontrolled error. There is some damn neural noise in the system. The micro-causes still get a (constrained) say!
 
  • #88
Lievo said:
[STRIKE]12, actually.[/STRIKE]
EDIT forget that :blushing:

Hmm, 10^11 neurons last time I counted, and 10^15 synaptic connections :-p.

But in actual size scale, just a 10^5 difference between neurons and brains. So an exaggeration there.
 
  • #89
Ken, apeiron:

Yes, we agree the map is not the territory. But the point about error truncation is that it's not particularly different from linear systems. It still happens whenever you stray from integers. This is a computational problem, not a theoretical problem.

That it's chaotic means you have to test your system's robustness, which means varying parameters and initial conditions over a wide range of values so that you can say "this behavior occurs in this system over a wide range of conditions". It really has nothing to do with the error truncation, only with the chaotic nature of the system itself. We really have quite sophisticated methods for handling that technical detail (that's a problem of digital signal processing, not theoretical science). I was always concerned about this coming into the research, but I recognize the difference now after hands-on experience formulating robustness tests. In fact, I so doubted my advisor's assurance at the time that I strenuously tested the error tolerance of my system to see how it changed the results, which is computationally expensive, and it revealed that her intuition was correct.

What's being studied in complex systems is general behavior (the fixed points of the system: stable, unstable, foci, saddle points, limit cycles, etc.) and the bifurcations (major qualitative changes in the system as a function of quantitative changes). Whether a particle went a little to the left or a little to the right is not significant to the types of statements we make about the system (which are not reductionist, deterministic statements, but general behavioral analysis). The plus side is that they can be reduced to physiologically meaningful statements that experimental biologists can test in the lab (as was done with the cell cycle: a case where theoretical chaos and reductionist biology complemented each other, despite their apparent inconsistencies).
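That kind of robustness testing can be sketched in miniature (my toy, not the actual cell-cycle models): with the logistic map, individual chaotic trajectories are hopelessly sensitive to truncation error, yet the qualitative classification of the attractor is stable over whole ranges of parameters and initial conditions.

```python
# Toy version of the robustness testing described above, using the
# logistic map x -> r*x*(1-x). We classify the long-run attractor by
# counting distinct points visited after the transient, then check that
# the qualitative answer ("it's a 2-cycle") survives sweeps over both
# the parameter r and the initial condition x0.

def attractor_size(r, x0=0.5, burn=2000, sample=200):
    x = x0
    for _ in range(burn):              # discard the transient
        x = r * x * (1.0 - x)
    seen = set()
    for _ in range(sample):
        x = r * x * (1.0 - x)
        seen.add(round(x, 4))          # count distinct attractor points
    return len(seen)

# Period-2 window: same qualitative answer across many r and many x0.
two_cycle = all(attractor_size(r, x0) == 2
                for r in (3.1, 3.2, 3.3, 3.4)
                for x0 in (0.2, 0.5, 0.7))
print(two_cycle)                       # True: the 2-cycle is robust

# Chaotic regime: the attractor is not a short cycle at all.
print(attractor_size(3.9) > 50)        # True: many distinct points
```

The statements being tested are about fixed points and bifurcation structure, not about which exact trajectory a given floating-point run traced out - which is Pythagorean's distinction between the computational problem and the theoretical one.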
 
  • #90
Hi Ken,
Ken G said:
There is no dispute these models are powerful for what they are powerful at, the overarching question is, can they be used to model free will, or are they in some sense leaving that behind and then reaching conclusions about what it isn't. This is the place where I would say your argument keeps hitting a wall.
In the course of this thread, I am making an effort (apparently successfully) not to provide my opinion on free will. I think there’s a common desire for humans to believe that our feelings and emotions (our phenomenal experiences) actually make a difference. We want to believe we are not automatons, that we have “free will” and our experiences really matter. We intuitively feel that there is something different about our experience of the world and that of an automaton, and therefore, the computational paradigm must somehow be wrong.

The paper by Farkas that I discussed previously is (partly) about how mental causation is epiphenomenal. Frank Jackson wrote a highly cited paper suggesting the same thing, called “Epiphenomenal Qualia”. However, epiphenomenal qualia, and the theories behind them, run into a very serious logical problem, one that seems to point to the simple fact that mental causation must be a fact, that phenomenal experiences must account for something, and that they must somehow make a difference in the physical world. That logical problem is called the knowledge paradox. In brief, if phenomenal events are epiphenomenal and mental causation is false, then how are we able to say anything about them? That is, how can we say that we are experiencing anything if we aren’t actually reporting what we are experiencing? In other words, if we're saying we are having a phenomenal experience, and that experience is epiphenomenal, meaning that the phenomenal experience CAN NOT INFLUENCE ANYTHING PHYSICAL, then how is it we are acknowledging this experience? How is it we are reporting what we experience if not for the simple fact that the physical words coming out of our mouths are causally influenced by the phenomenal experience? If phenomenal experiences are really not causing anything, then they can't enter the causal chain and they can't cause us to report, in any way/shape/form, that experience. They are not phenomena that can be reported unless they somehow influence a person to reliably report the phenomena.

The solution to that question as provided by Farkas or Jackson or Jaegwon Kim or anyone else who’s tried to respond to it, is that there is a 1 to 1 relationship between the physical “supervenience basis” (ie: the neurons) and the phenomenal experience. What they’re saying is that the experience of some event, such as the experience of seeing a red fire truck, hearing the sirens, and smelling the diesel fumes as it passes, is “the same as” the physical interaction of all those neurons on which the experience relies. So yes, that experience relies upon the interaction of neurons, and we might say that we had an experience of the sight, sound and smell of the fire truck as it passed. But if this experience is truly epiphenomenal then we have no ability to report it. We have no ability to say “I experienced the sight of red, the sound of “waaaaaaa” and the smell of diesel exhaust. It wasn’t just a physical thing, I actually experienced it.”

Why don’t we have an ability to report the experience? Because the experience is epiphenomenal, meaning that what these people are really wanting us to believe is that I’m not saying I saw red, and I’m not saying I heard the siren, and I'm not telling you about the smell of diesel fuel. Those expressions came out of my mouth merely because of the physical neuron interactions and because there is a 1 to 1 correlation between the phenomenal experience and the physical interactions. But it wasn’t the phenomenal experience that made me say anything, it was the physical interactions. So in short, there is no room to “reliably report” any phenomenal experience. The fact that I actually was able to behave as if I experienced those things and report my experience, is only due to serendipity. My report that there was an actual correlation between the phenomenal state and the physical state is utterly fortuitous. This line of reasoning was brought out by Shoemaker, “Functionalism and Qualia” and also Rosenberg, “A Place for Consciousness”.

I personally don’t believe that every time I say I’ve experienced something or act as if I did, the actual reason I’m saying and acting in such a manner is that there just happens to be a fortuitous correlation between those phenomenal experiences and the physical supervenience basis. That said, I still accept, in totality, the philosophical basis that our science relies on. FEA is a correct philosophy, but it is not inconsistent with mental causation.
 
  • #91
Q_Goest said:
There are literally tons of papers out there that show how neurons are made to act exactly as local causal physics would have them (ie: weak emergence). Yes, neurons are highly nonlinear, and yes, to some degree they exhibit stochastic behavior - to the experimentalist; which raises the question of whether they truly are probabilistic, or whether there are ‘hidden variables’, so to speak, that we simply haven’t nailed down. Even if we find that neurons exhibit truly probabilistic behaviors such as those radioactive decay exhibits, is that single feature of a neuron truly going to lead us to finding “free will”?

Well, first, I think we all agree the notion of "free will" is already misconstrued, don't we?

If we have any willpower, it's severely limited. Besides being confined by physical laws, as you probably know, there are a number of experiments that can show, at the least, that short-term free will is questionable. We can mark a lot of correlations between education, social class, and crime. We can find genes that link to behavior. If there's any free will in a single individual, it's a very weak force.

I don't see what "downward causation" really means. Physically, it doesn't seem any different from constraints. Constraints can be reduced to particle interactions themselves. And even if those constraints are holonomic, they can still be modeled as functions of more degrees of freedom (though stochastic models are sometimes more successful). At some point though, you have to talk about what the initial conditions are for those degrees of freedom and how they arose. Once you model the whole universe, that becomes paradoxical... do you just weave them back into your system so you have one big closed loop? If matter and energy are to be conserved, it would appear so; and that would relieve the paradox (but I'm obviously speculating, here).

To me, "downward causation" seems to be an anthropomorphic desire to inject the subjective human quality of "willpower" into interpretations of global physical events. The only thing, to me, that makes global events significant is the observer that sees several small events occurring at the same time and makes up a story so that it's all one big picture; that way the observer can have a stable world view. Evolutionarily, of course, this makes sense, because it helps us (through Bayesian learning) to instigate behavior towards food and shelter and away from danger.

Do I deny that, for instance, language and society influence the personality of an individual? Not at all. But it could simply be the case of the right reduced events happening at the right time that are often correlated together (so we see the global event as significant with our human brains).

That there's a subjective experience arising is another thing that, so far, we can't touch, but through our research we have begun to gain an understanding of what the subjective experience is and is not... hopefully this will lead us to a mechanism for subjectivity (I don't have the slightest inkling how you would even begin to explain subjectivity with anything more than storytelling).
 
  • #92
Q_Goest said:
I think there’s a common desire for humans to believe that our feelings and emotions (our phenomenal experiences) actually make a difference. We want to believe we are not automatons, that we have “free will” and our experiences really matter. We intuitively feel that there is something different about our experience of the world and that of an automaton, and therefore, the computational paradigm must somehow be wrong.

That's not completely true. It goes the other way, as well. I posted a Human Behavioral Biology lecture series in the Biology forum (excellent series, you should really watch it if this kind of stuff interests you). The lecturer discusses the history of the debate between the southern US scientists, and the european marxist scientists at the time.

The US scientists were promoting a largely biosocial view in which everything was predetermined by wild nature, and it's largely speculated that they had a political agenda to justify their behavior at the time. There was even an outburst of angry Marxists shouting and screaming "There will be law!"

So there is an allure to the opposite effect which we have to be equally careful of. To take accountability away from criminals and tyrants, particularly (but I'm sure we've all, at some point, justified our own behavior in some small trivial way as "it's just who I am").
 
  • #93
Hi apeiron, I honestly wish there was more to agree on. Anyway...
apeiron said:
The FEA approach you describe only works because the global constraints are taken as already in existence and so axiomatic. What does not change does not need to be mentioned when modelling.

So take a benard cell. An entropic gradient is presumed. The source and the sink are just there. The model does not seek to explain how this state of affairs developed, just what then happens as a consequence.

Order then arises at a critical temperature - the famous hexagonal cells. Local thermal jostlings magically become entrained in a global scale coherent motion.

Now these global scale cells do in fact exert a downward causal effect. As just said, they entrain the destinies of individual molecules of oil. This is what a dissipative structure is all about. Global constraints (the order of the flow) acting to reduce the local degrees of freedom (the random thermal jostle of the molecules become suddenly far less random, far more determined).

So benard cells are frequently cited as an example of self-organisation due to the "mysterious" development of global order.

There are other features we could remark on, like the fact that the whorls are hexagonal (roughly) rather than circular. The fact that the activity is confined (globally constrained) reduces even the "local degrees of freedom" of these benard cells. Circular vortexes are the unconstrained variety. Hexagonal ones are ones with extra global constraints enforced by a packing density.

Note too that the macro-order that the benard cell is so often used to illustrate is a highly delicate state. Turn the heat up a little and you soon have the usual transition to chaos proper - whorls of turbulence over all scales, and no more pretty hexagonal cells.
There are no "global constraints" in FEA unless you consider the local, causal influences "global". I honestly don't know what you mean by global constraints unless they are the boundary conditions on the overall physical system. See, we can just as easily extend the boundary in FEA, such as by extending the liquid pool out past the area where it is being heated to form Benard cells. When we do that, everything stays the same. The boundaries on every element have only the local, causal forces (momentum exchange, conservation of energy, conservation of mass, gravitational field strength, etc.) ascribed to them, and those boundaries on every volume must be in dynamic equilibrium with every other volume and with the boundary of the system being modeled overall, as if that overall boundary were just another layer of finite elements. FEA is truly an example of weak emergence as Bedau describes it in his paper.
 
  • #94
Hi Pythagorean
Pythagorean said:
I don't see what "downward causation" really means. Physically, it doesn't seem any different from constraints.
If you're not familiar with the term "downward causation", please read up on the topic. Here's a couple of papers I can recommend:
Chalmers, "Strong and Weak Emergence"
Emmeche et al, "Levels, Emergence, and Three Versions of Downward Causation"

From Chalmers
Downward causation means that higher-level phenomena are not only irreducible but also exert a causal efficacy of some sort. Such causation requires the formulation of basic principles which state that when certain high-level configurations occur, certain consequences will follow.

From Emmeche, strong downward causation is described as follows:
a given entity or process on a given level may causally inflict changes or effects on entities or processes on a lower level. ... This idea requires that the levels in question be sharply distinguished and autonomous...

Basically, it's saying that local level physical laws are overridden by other physical laws that arise when certain higher level phenomena occur. I'm not sure in what way "constraints" is being used in some of the contexts here, but certainly, "strong downward causation" is something well defined and largely dismissed as being too much like "magic". Strong downward causation is largely refuted by everyone, at least on a 'classical' scale. There are some interesting concepts close to this that might apply at a molecular level, but that's for another day.
 
  • #95
Q_Goest said:
There are no "global constraints" in FEA unless you consider the local, causal influences "global". I honestly don't know what you mean by global constraints unless those are the boundary conditions on the overall physical system. See, we can just as easily extend the boundary in FEA, such as by extending the liquid pool out past the area where it is being heated to form Bénard cells. When we do that, everything stays the same. The boundaries on every element carry only the local, causal forces (momentum exchange, conservation of energy, conservation of mass, gravitational field strength, etc.) ascribed to them, and every volume must be in dynamic equilibrium with every other volume and with the boundary of the system being modeled overall, as if that overall boundary were just another layer of finite elements. FEA is truly an example of weak emergence as Bedau describes it in his paper.

Yes, I am sure there is no way to change your mind here. But anyway, boundary conditions would be another name for global constraints of course.

Immediately, when challenged, you think about the way those boundary conditions can be changed without creating a change. Which defeats the whole purpose. The person making the change is not factored into your model as a boundary condition. And you started with a system already at equilibrium with its boundary conditions and found a way to move them so as not to change anything. (Well, expand the boundary too fast and it would cool and the cells would fall apart - but your imagination has already found a way not to have that happen because I am sure your experimenter has skillful control and does the job so smoothly that the cells never get destabilised).

So FEA as a perspective may see no global constraints. Which is no problem for certain classes of modelling, but a big problem if you are using it as the worldview that motivates your philosophical arguments here.

And as I said, a big problem even if you just want to model complex systems such as life and mind.

As asked, I provided examples of how top-down constraints such as selective attention have been shown to alter local neural receptive fields and other aspects of their behaviour. You have yet to explain how this fits with your FEA perspective where this kind of hierarchical causality does not appear to exist.
 
  • #96
nismaratwork said:
Hmmmm... I like it... any good reading you could recommend?

Nonlinear optics: past, present, and future
Bloembergen, N.

is what I found to answer your question (it looks mostly at the history), which might be a good background from which to go and find your particular interests. I think it really depends on your specific interest, but I've had very little exposure to nonlinear optics.
 
  • #97
Q_Goest said:
Hi Pythagorean

If you're not familiar with the term "downward causation", please read up on the topic. Here's a couple of papers I can recommend:
Chalmers, "Strong and Weak Emergence"
Emmeche et al, "Levels, Emergence, and Three Versions of Downward Causation"

From Chalmers, and from Emmeche, strong downward causation is described as quoted above. Basically, it's saying that local level physical laws are overridden by other physical laws that arise when certain higher level phenomena occur. I'm not sure in what way "constraints" is being used in some of the contexts here, but certainly, "strong downward causation" is something well defined and largely dismissed as being too much like "magic". Strong downward causation is largely refuted by everyone, at least on a 'classical' scale. There are some interesting concepts close to this that might apply at a molecular level, but that's for another day.

Yes, I've seen the definitions, but my point was, I guess, that I stand alongside the people who think it's "magic". It seems rather mystical to me, which means either I don't understand it or it's bs. I chose to say I didn't understand it; I didn't mean that I didn't know the definition.

I can definitely accept that there's global behavior that doesn't occur at smaller scales (a water molecule does not manifest a wave). I work in systems that can be considered weakly emergent. It seems to me that it would take omniscience to judge strong emergence - or a really simple and perfectly closed system (but then your chance of even weak emergence dwindles). Otherwise you're ignoring the rather high probability (as dictated by history) that there's another aspect ("hidden variable"). It will take a lot of evidence to rule out the higher probability.
 
Last edited:
  • #98
Q_Goest said:
Basically, it's saying that local level physical laws are overridden by other physical laws that arise when certain higher level phenomena occur. I'm not sure in what way "constraints" is being used in some of the contexts here, but certainly, "strong downward causation" is something well defined and largely dismissed as being too much like "magic". Strong downward causation is largely refuted by everyone, at least on a 'classical' scale. There are some interesting concepts close to this that might apply at a molecular level, but that's for another day.

Chalmers and others might like to stress irreducibility, but that is not actually what I've been saying at all.

The argument is instead that both local and global causes are reducible to "something else". Which is where Peirce's logic of vagueness, etc, comes in.

So Q Goest is presenting sources and ideas he is familiar with, not the ones I am employing.
 
  • #99
apeiron said:
Chalmers and others might like to stress irreducibility, but that is not actually what I've been saying at all.

The argument is instead that both local and global causes are reducible to "something else". Which is where Peirce's logic of vagueness, etc, comes in.

So Q Goest is presenting sources and ideas he is familiar with, not the ones I am employing.

Good to know; this is what I mean by "hidden variable", though "variable" is too specific a word and is attached to an irrelevant history. But this looks like weak emergence to me; I had the impression you were a proponent of strong emergence.
 
  • #100
Pythagorean said:
But this appears like weak emergence to me; I had the impression you were a proponent of strong emergence.

How much stronger can you get in saying everything emerges?

So mine is super-emergence. The uber, premium brand stuff! None of this namby pamby so-called strong stuff, let alone the wilting weak, that others want to palm off on you.
 