Does Neuroscience Challenge the Existence of Free Will?

  • Thread starter: Ken G
  • Tags: Free will, Neural
AI Thread Summary
The discussion centers on the implications of Benjamin Libet's research, which suggests that decisions occur in the brain before conscious awareness, raising questions about free will and determinism. Participants explore whether this indicates a conflict between determinism and free will, proposing that neurological processes may be deterministic while free will could exist in a non-physical realm. The conversation critiques the reductionist view that equates physical processes with determinism, arguing instead for a more nuanced understanding that includes complexity and chaos theory. The idea that conscious and unconscious processes are distinct is emphasized, with a call for a deeper exploration of how these processes interact in decision-making. The limitations of current neuroscience in fully understanding consciousness and free will are acknowledged, suggesting that a systems approach may be more effective than reductionist models. Overall, the debate highlights the complexity of free will, consciousness, and the deterministic nature of physical processes, advocating for a more integrated perspective that considers both neurological and philosophical dimensions.
  • #151
apeiron said:
Patently the reaction vector is not actually a symmetric entity. Instead it sums up all the contextual constraints that are found to impinge on the locale. If you push against the wall, then it is not just several square inches of wall that pushes back. It is the building, the planet to which it is attached, the gravity fields which affect the planet, etc.
I think you are making a lot of valid and insightful points, but I would see a need for correction in this one. Newton's third law is not that "the universe will conspire in a complex way to ensure that every force results in a reaction force", it is that every single force that can ever happen comes in a pair, because forces are binary relationships, and it makes no difference what the rest of the universe is doing at the time. This is why it is not necessary for that square inch of the wall to be attached to anything in order for any force you put on it to meet with an equal and opposite force on your hand. If the wall is attached, it can remain stationary and provide that counter-force. If it is unattached, it will accelerate, and then its inertia will allow it to provide that counter-force. If the wall is unattached and has no inertia, then you cannot apply a force to it in the first place. This is why the ability to generate and experience forces is connected to inertia-- an inertialess charge, for example, would be singular in Newton's scheme, not because the universe would conspire in some complicated way to disallow it, but because that one single massless charge in the presence of an electric field yields a mathematical singularity all by itself within the Newtonian framework.
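To see that singularity explicitly, it is just Newton's second law with an electric force (a minimal sketch, nothing beyond the claim above):

```latex
a = \frac{F}{m} = \frac{qE}{m} \;\longrightarrow\; \infty
\quad \text{as } m \to 0 \text{ with } q \neq 0 .
```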

Indeed, it is often said that the third law is the reason for conservation of momentum, but when conservation of momentum is viewed as a consequence of translational symmetry, the logic flows in the opposite direction: the third law holds because of translational symmetry. So Newton's "reaction" concept is indeed a symmetry principle, as action without reaction would always have to break translational symmetry. That could only happen if the larger universe conspired to break that symmetry, so it is not the presence of the reaction, but its absence, that would require a larger universe to provide the necessary constraints.
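To make both directions of that logic explicit, here is a minimal two-body sketch (standard Newtonian bookkeeping, nothing beyond what is claimed above):

```latex
% Third law => momentum conservation:
\dot{p}_1 = F_{21}, \qquad \dot{p}_2 = F_{12}, \qquad
F_{12} = -F_{21} \;\Rightarrow\; \frac{d}{dt}\,(p_1 + p_2) = 0 .

% Translational symmetry => third law: if the potential depends
% only on the separation of the two bodies,
V(x_1, x_2) = V(x_1 - x_2) \;\Rightarrow\;
F_{12} = -\frac{\partial V}{\partial x_2}
       = +\frac{\partial V}{\partial x_1} = -F_{21} .
```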
 
  • #152
apeiron said:
Causality is of course fundamental and freewill as near epiphenomenal as you can get :smile:. Causality would be our general or universal model of why anything happens (why even existence happens), while freewill is just some vanishingly rare, relatively impotent on the cosmic scale, feature of a complex system.
That is certainly a commonly adopted stance, but I would like to suggest another angle. I think that everything we think about how reality "works" is subordinated to how we interact with reality. And how we interact with reality is subordinated to how our brain works, including free will. So free will should not be expected to emerge from our study of reality, it should emerge (if at all) from our study of our brains.

So then the obvious conundrum emerges-- is our brain not a part of reality, so will not the same techniques that worked on "external" reality work on the brain? Maybe yes, and maybe no-- that's the point. The way we learn about reality is so caught up in the functioning of our brains that it is no longer obvious "which side of the microscope" the brain is on. We have no guarantee that the way we think about causation for, say, a charge in a field, will help us understand how the charges in our brains help us think-- it's just an article of faith that the kinds of questions that are pertinent will be the same.

Now, I have no suggestions other than to apply the same techniques and ask the same types of questions, perhaps on a more sophisticated level (like the tennis match between up-down and down-up causation that you have been advocating), or on the reductionist level that has also been discussed. I'm just saying we should not start out with the assertion that this must lead us to the most fundamental results-- it may lead us down a dead-end street. A brain trying to understand a brain might be like a puppy chasing its tail, and when a puppy chases its tail, it has no idea why the tail keeps moving just out of reach every time the puppy makes a lunge at it, so it keeps on lunging, because lunging has worked so well on everything else.
In philosophy, the fundamental is the general. In physics, it is generally taken to be the smallest scale - which is why atomistic reductionism is the driving idea.
Yes, and it is remarkable that quantum mechanics obeys the correspondence principle-- there is nothing that appears to emerge in simple dynamical systems when passing from the quantum to the macro domain that invalidates the quantum analysis, it merely renders the quantum analysis inelegant. But as we both agree, that may be because of the way the problems are "rigged" to obey the correspondence principle from the outset-- the correspondence principle may not be a principle about reality, it may be a principle about physics, or how physics is generally done.
And when it comes to identifying these general principles or universals, philosophy finds that they are always dichotomies or complementary/synergistic/asymmetric pairs.

So as well as the local, there is the global. As well as the discrete, there is the continuous. As well as flux, there is stasis. As well as chance, there is necessity, etc.

Which is why it is no surprise that causality itself is dualised. As well as bottom-up construction, there is top-down constraint.
The tennis match. Indeed I have long felt the "yin-yang" symbolism of eastern philosophy was one of the most profound concepts the human mind has ever developed-- the importance of both apparent contrast and deeper unity in generating understanding.
 
  • #153
Ken G said:
I think you are making a lot of valid and insightful points, but I would see a need for correction in this one. Newton's third law is not that "the universe will conspire in a complex way to ensure that every force results in a reaction force", it is that every single force that can ever happen comes in a pair, because forces are binary relationships, and it makes no difference what the rest of the universe is doing at the time. This is why it is not necessary for that square inch of the wall to be attached to anything in order for any force you put on it to meet with an equal and opposite force on your hand. If the wall is attached, it can remain stationary and provide that counter-force. If it is unattached, it will accelerate, and then its inertia will allow it to provide that counter-force. If the wall is unattached and has no inertia, then you cannot apply a force to it in the first place. This is why the ability to generate and experience forces is connected to inertia-- an inertialess charge, for example, would be singular in Newton's scheme, not because the universe would conspire in some complicated way to disallow it, but because that one single massless charge in the presence of an electric field yields a mathematical singularity all by itself within the Newtonian framework.

Not true. If the wall is not attached to anything, it just means you are having to exert less of an accelerative force and so the contextual back-reaction is matchingly less. All you have to overcome is the resistance of surrounding air molecules.

And if the wall "doesn't move", then the force you exert will produce heat and noise. Equilibration is still happening.

If you and the wall are in space, then it becomes even clearer that who pushed who is a perfectly symmetrical question so far as the laws of physics are concerned.

I'm not sure what you are trying to say about inertia, but the whole point of an inertial body is that it is at equilibrium with the world. There is nothing acting on it to change its state of motion. There is no action, and so no re-action needed to re-equilibrate our local~global view of the body.

Ken G said:
So Newton's "reaction" concept is indeed a symmetry principle, as action without reaction would always have to break translational symmetry. That could only happen if the larger universe conspired to break that symmetry, so it is not the presence of the reaction, but its absence, that would require a larger universe to provide the necessary constraints.

Err, it is the model that reduces the description of the world to a set of symmetries. That is how the actual entangled messy dynamism of the world can get abstracted away.

Science seeks the equilibrium stories because then it needs only measure the macrostate - the global constraints - and can ignore the confusion of local detail. The ontological mistake is then to call those robust and stable macrostates (like spacetime with its "inherent" symmetries) the fundamental ground of things.
 
  • #154
Ken G said:
That is certainly a commonly adopted stance, but I would like to suggest another angle. I think that everything we think about how reality "works" is subordinated to how we interact with reality. And how we interact with reality is subordinated to how our brain works, including free will. So free will should not be expected to emerge from our study of reality, it should emerge (if at all) from our study of our brains.

What you might like as the way out of this conundrum are the philosophies offered by CS Peirce and Robert Rosen.

Peirce starts completely from "inside subjectivity" and works his way out to an objective description of reality in a reasoned fashion. It is very important that in logic he places abduction as prior to even induction and deduction.

And Rosen (a theoretical biologist who died about 10 years back) wrote about modelling relations theory. This is ultimately a theory of mind as the mind is a modelling system.

Ken G said:
The tennis match. Indeed I have long felt the "yin-yang" symbolism of eastern philosophy was one of the most profound concepts the human mind has ever developed-- the importance of both apparent contrast and deeper unity in generating understanding.

A good book here is Joanna Macy's Mutual Causality in Buddhism and General Systems Theory.

But I have to say that yin yang is a very undeveloped logic. The ancient Greeks did this systems view of logic much better (see Anaximander, then Aristotle). Of course, it is an open question whether the Greeks inspired the Taoists or the other way round, as both views arose around the same time.

See for instance... http://arxiv.org/abs/physics/0309104
 
  • #155
apeiron said:
What you might like as the way out of this conundrum are the philosophies offered by CS Peirce and Robert Rosen.

Peirce starts completely from "inside subjectivity" and works his way out to an objective description of reality in a reasoned fashion. It is very important that in logic he places abduction as prior to even induction and deduction.

And Rosen (a theoretical biologist who died about 10 years back) wrote about modelling relations theory. This is ultimately a theory of mind as the mind is a modelling system.



A good book here is Joanna Macy's Mutual Causality in Buddhism and General Systems Theory.

But I have to say that yin yang is a very undeveloped logic. The ancient Greeks did this systems view of logic much better (see Anaximander, then Aristotle). Of course, it is an open question whether the Greeks inspired the Taoists or the other way round, as both views arose around the same time.

See for instance... http://arxiv.org/abs/physics/0309104

Hmmm... good reading... do you have any more along these lines? I got a Kindle, and I'm in a book-buying mood... a wide range would be best.
 
  • #156
apeiron said:
Not true. If the wall is not attached to anything, it just means you are having to exert less of an accelerative force and so the contextual back-reaction is matchingly less. All you have to overcome is the resistance of surrounding air molecules.
Not air resistance, inertia. And the force I can exert may have more to do with my physique than whether or not the wall is attached. My point is merely that no matter what the rest of the universe is doing, action/reaction is fundamental in Newton's system. The rest of the universe may get some input into how much force I can exert, that is a complex issue, but it doesn't get any say as to whether or not the force I can exert will be met with an equal reaction force, that is always true even in a universe of just me and the wall. This issue is actually a purely reductionist triumph-- if you analyze the force in terms of a bunch of pieces interacting with each other, and each of those pieces obeys Newton's third law (as is postulated in that system), then the whole will also.
I'm not sure what you are trying to say about inertia, but the whole point of an inertial body is that it is at equilibrium with the world.
Inertia just means mass, it doesn't mean not accelerating-- that's "inertial." Why those terms are used like that, I have no idea.

Err, it is the model that reduces the description of the world to a set of symmetries. That is how the actual entangled messy dynamism of the world can get abstracted away.
Yes, the model is an abstraction. It involves a background against which a universe can exist, and the background is translationally invariant. That means you could take that same universe and translate it, with no effect, as long as you translate everything. If you only translate part, then the "everything else" becomes a place you can put unbalanced forces and momenta, such that the part you are dealing with won't have an action/reaction principle. So if you don't see action/reaction working, then it implies (in this model system) that you are not dealing with the whole universe-- the presence of an "external" universe makes its presence known in the violation of Newton's third law, not in its enforcement.
Science seeks the equilibrium stories because then it needs only measure the macrostate - the global constraints - and can ignore the confusion of local detail. The ontological mistake is then to call those robust and stable macrostates (like spacetime with its "inherent" symmetries) the fundamental ground of things.
And indeed Newton's laws are found wanting for just this reason.
 
  • #157
@Ken G: I'd take reading tips from you too...
 
  • #158
apeiron said:
What you might like as the way out of this conundrum are the philosophies offered by CS Peirce and Robert Rosen.

Peirce starts completely from "inside subjectivity" and works his way out to an objective description of reality in a reasoned fashion. It is very important that in logic he places abduction as prior to even induction and deduction.
That does sound interesting. I must confess I had never even heard of "abduction" (other than the alien version) until I googled it, and it is indeed an important part of formal reasoning-- especially scientific reasoning. Indeed, I can see how easy it would be to argue that deduction and induction are just opposite extreme forms of abduction. This also jibes with the issue of truth vs. meaning that came up earlier: logic is often thought of as the arena for establishing syntactic truth, whereas experience is the arena of meaning. Induction and deduction are syntactic, a computer could be programmed to recognize them, but abduction would seem to straddle the domains of truth and meaning, sacrificing a formal stance in either realm in exchange for the ability to cross their boundaries. Fuzzy logic.
But I have to say that yin yang is a very undeveloped logic. The ancient Greeks did this systems view of logic much better (see Anaximander, then Aristotle).
I'm not sure I would consider yin/yang a form of logic at all-- perhaps the Greeks took a different turn when they explored the power of logic. Indeed that may be the fundamental turn that distinguishes western vs. eastern thinking-- form vs. function, reason vs. introspection, consistency vs. contradiction. The Greeks gained great powers by banishing contradiction, and it has taken thousands of years to "play out the string" they started. But something might have gotten left behind, something that must someday be confronted in a theory of mind. Was yin/yang left behind for being underdeveloped, or just too far ahead of its time?
 
  • #159
nismaratwork said:
@Ken G: I'd take reading tips from you too...
I'm not as adept at tracking my sources of inspiration-- my thoughts come from a mish-mash of ideas I've been exposed to, including by people such as those on this thread. Probably the usual cast of characters in physics philosophy: Feynman, Wheeler, Wittgenstein, Penrose, Einstein, Bohr, Heisenberg, etc. Some of whom claim to be "shut up and calculate" types, a claim I never pay any attention to. :)
 
  • #160
Ken G said:
I'm not as adept at tracking my sources of inspiration-- my thoughts come from a mish-mash of ideas I've been exposed to, including by people such as those on this thread. Probably the usual cast of characters in physics philosophy: Feynman, Wheeler, Wittgenstein, Penrose, Einstein, Bohr, Heisenberg, etc. Some of whom claim to be "shut up and calculate" types, a claim I never pay any attention to. :)

I don't mind, although it seems we're already fans of similar authors. Thanks very much Ken G, and if you think of anything later, just drop me a PM... I'm always hunting for reading material.
 
  • #161
Ken G said:
Was yin/yang left behind for being underdeveloped, or just too far ahead of its time?

Or both - as first we had to work out reductionism, now we can go back to the project of holism.

I am just describing my own experience really. I learned the modern view of systems first. Then heard about Peirce. Then discovered that Anaximander, the first real philosopher, had with surprising completeness got the whole essential systems story just about right. And Aristotle - if you are reading him with a systems eye - was in fact struggling to marry the two perspectives: Anaximander's systems thinking and the later, equally compelling, worldview of atomism.

This is not of course how many people are taught Aristotle. History is told by the winners and so the whole of Ancient Greek philosophy 101 is about how these old fools lurched from one metaphysical extreme to another.

The early philosophers sought the fundamental substance (was it air, water, the apeiron?). Plato said no, the fundamental was form (his version of substance, chora, barely gets mentioned). Heraclitus said all was flux (actually no, his view was more complex) and Parmenides bamboozled them by arguing there was only stasis, the impossibility of actual change - and instead of the illusory many, just the perfect one.

So it goes on. Every step of the metaphysical development hinged on discovering nature's dichotomies, but modern reductionism demands the story be taught as a series of monistic turns of thought.

So what I am saying is there is some stunningly well worked out systems theory in ancient Greek philosophy. But no one really tells that tale.

The best academic account of Anaximander's philosophy is Anaximander and the Origins of Greek Cosmology by Charles Kahn - pretty dry of course.
 
  • #162
apeiron said:
Or both - as first we had to work out reductionism, now we can go back to the project of holism.

I am just describing my own experience really. I learned the modern view of systems first. Then heard about Peirce. Then discovered that Anaximander, the first real philosopher, had with surprising completeness got the whole essential systems story just about right. And Aristotle - if you are reading him with a systems eye - was in fact struggling to marry the two perspectives: Anaximander's systems thinking and the later, equally compelling, worldview of atomism.

This is not of course how many people are taught Aristotle. History is told by the winners and so the whole of Ancient Greek philosophy 101 is about how these old fools lurched from one metaphysical extreme to another.

The early philosophers sought the fundamental substance (was it air, water, the apeiron?). Plato said no, the fundamental was form (his version of substance, chora, barely gets mentioned). Heraclitus said all was flux (actually no, his view was more complex) and Parmenides bamboozled them by arguing there was only stasis, the impossibility of actual change - and instead of the illusory many, just the perfect one.

So it goes on. Every step of the metaphysical development hinged on discovering nature's dichotomies, but modern reductionism demands the story be taught as a series of monistic turns of thought.

So what I am saying is there is some stunningly well worked out systems theory in ancient Greek philosophy. But no one really tells that tale.

The best academic account of Anaximander's philosophy is Anaximander and the Origins of Greek Cosmology by Charles Kahn - pretty dry of course.

Dry is fine as long as it's informative... I feel cheated by my education, which fell into precisely the traps and tropes you describe.
 
  • #163
Ken G said:
Not air resistance, inertia.

Inertia? What is holding together the atoms of this bit of wall we are pushing about? There is a network of electrostatic bonds with an internal equilibrium to assert. We can only maintain a fiction of a localised reaction vector because the bit of wall does not fly apart into its atoms with the shove. If the wall did atomise, then we would have to chase after all the individual stories represented by the flying atoms (the now widely scattered "inertia").

You are of course familiar with the Machian mechanics debate as well?

Inertia just means mass, it doesn't mean not accelerating-- that's "inertial." Why those terms are used like that, I have no idea.

Inertia and mass do appear tied together. Though how this is the case is not completely straightened out (the Higgs mechanism is remarkably contextual wouldn't you say? :devil:)

And indeed Newton's laws are found wanting for just this reason.

You mean relativity fixed things by stepping back to more general symmetries - ones that could include spacetime as well as its massive events.

So exactly as I have argued, Newton took his boundary conditions as static, eternal, uninvolved. This could not last. Einstein removed those specific constraints to model reality at a more general level - one where those local values (the stress-energy tensor values) that make spacetime flat or empty have to be put back in to determine the state of the geometry.

Newton's model was so constrained that it lacked flexibility. Einstein's model was less constrained and so constraints could be added back in as a choice. But it is still all the same trick - mechanics. You equilibrate away your global constraints to arrive at a model based on global symmetries. Then you add stuff back into this globally inanimate picture to animate it locally as required.

Talking about excellent reads, The Evolution of Physics by Einstein and Infeld is a great insight into how the mechanical view developed.

But there are thousands of must-read books. Sigh.
 
  • #164
Q_Goest,

In addition to what was already said, you might want to consider what you know about a particular class of differential equations in which the order of differentiation is no longer an integer (so we can take the 5.4th derivative instead of the 5th or 6th). If you have a network of "cells" and you couple them with such a term, it's not classical diffusion anymore, but it's still valid.

In this case, we no longer have nearest-neighbor influences only. Each member of the system can now depend on the whole system's global state rather than its nearest neighbors. It has been shown recently that the diffusive process in highly turbulent systems is better described by this non-classical diffusive process (and we're still not talking about QM, despite it being non-classical).

And of course, via my argument from before, these are open systems in reality, so you can introduce any kind of driving/forcing term you want to represent particular global effects. You're not going to be able to take all the different models describing different aspects and put them all together without conflicts and inconsistencies. They're models; they only work for what they were designed to do.
 
  • #165
apeiron said:
I learned the modern view of systems first. Then heard about Peirce. Then discovered that Anaximander, the first real philosopher, had with surprising completeness got the whole essential systems story just about right. And Aristotle - if you are reading him with a systems eye - was in fact struggling to marry the two perspectives: Anaximander's systems thinking and the later, equally compelling, worldview of atomism.
I hadn't heard much about Anaximander, I'll have to find out more. I've long been interested in Parmenides and Zeno, and how they tried to invent a form of logic that could tell them things about reality, even if it told them the reality they recognize is an illusion. Amazingly, it kind of worked, as modern physics has found some remarkable synergy with the logical impossibility of change-- and the modern "quantum Zeno effect" must be some kind of record for the longest time between an idea and its experimental confirmation, even if in an unanticipated way.
The early philosophers sought the fundamental substance (was it air, water, the apeiron?).
Which explains your handle...
So what I am saying is there is some stunningly well worked out systems theory in ancient Greek philosophy. But no one really tells that tale.
I'm often struck by how many of the great questions they anticipated. It's almost impossible to find territory they didn't touch on somewhere. Ironically, they end up getting bashed for it-- so many questions, so few answers. People don't understand the most important thing philosophy does is map the terrain, you have to find your own destinations.
 
  • #166
apeiron said:
Inertia? What is holding together the atoms of this bit of wall we are pushing about? There is a network of electrostatic bonds with an internal equilibrium to assert. We can only maintain a fiction of a localised reaction vector because the bit of wall does not fly apart into its atoms with the shove.
If the wall flies apart, the reaction vector might not be localizable into a single one, but there will still be reaction vectors. The forces that appear will depend on the larger context, but not the presence of reaction vectors-- in Newton's scheme, every force always comes in a pair, whether the substance shatters or not. The internal forces of which you speak only affect the global context that determines how the forces play out, but not their coming in pairs-- the latter is purely reductionist, it's a sum of parts.
If the wall did atomise, then we would have to chase after all the individual stories represented by the flying atoms (the now widely scattered "inertia").
True, it would not be easy to make an accounting of all the action/reaction pairs there. But the Newtonian system says that they would be there all the same, and no matter how you group up the halves of the action/reaction pairs microscopically, the macroscopic result will always also be an action/reaction pair, because it is a simple sum.
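To make that "simple sum" explicit (a minimal sketch of the bookkeeping, nothing beyond what is claimed above): partition the atoms into any two groups A and B; if every microscopic pair obeys f_ij = -f_ji, the macroscopic forces between the groups inherit the same property:

```latex
F_{A \to B} = \sum_{i \in A} \sum_{j \in B} f_{ij}
            = -\sum_{j \in B} \sum_{i \in A} f_{ji}
            = -F_{B \to A} .
```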
You are of course familiar with the Machian mechanics debate as well?
Yes, the idea that mass there provides inertia here. But we're in the Newtonian system here-- explaining the source of inertia is not included!

Inertia and mass do appear tied together. Though how this is the case is not completely straightened out (the Higgs mechanism is remarkably contextual wouldn't you say? :devil:)
I'm not even sure the Higgs mechanism explains it.
So exactly as I have argued, Newton took his boundary conditions as static, eternal, uninvolved.
Yes, that's true, I think your point is valid that Einstein's view is more of a systems view, because the spacetime backdrop is itself embroiled in the action. In fact, that's not the end of it, because Einstein generates differential equations, which are open to the need for additional boundary conditions-- more systems. I was just saying that the law of action/reaction percolates up from the atomistic foundation of Newton's approach, it's bottom-up. Had Newton been completely right, there wouldn't be much room for a systems approach to any aspect of physical reality. Perhaps an Anaximander fan in Newton's own day could have been skeptical that reality could exhibit rich phenomena, like conscious choice, in such a sterile scheme, but that was Newton's scheme all the same.
 
Last edited:
  • #167
Ken G said:
If the wall flies apart, the reaction vector might not be localizable into a single one, but there will still be reaction vectors.

I'm not sure whether you are agreeing or not. How do you account for a rocket ship for example? Do you try to sum up a bunch of tiny force vectors for the hot plume of combusting gas, or just go with the simpler single vector for the mass flow rate?

But the point I was making was about how the systems view is actually smuggled into mechanics.

Action~reaction is an example of how a global constraint (the presumption of energy conservation) has a downward causal effect(!) on the locales of a Newtonian system. That global symmetry entails the local ones. So any time something is seen to happen (an acceleration), there has to be a localised re-equilibration, a local conservation of energy. And this justifies the very simple approach of representing the situation as a pair of identical cancelling force vectors.

The same mechanical trick is repeated elsewhere.

In GR, of course, the law of conservation is not hardwired in as a global symmetry of the model. Instead, it has to be built in as a further constraint - such as by specifying an inertial reference frame. So GR relaxes a global constraint to make the baseline model all flexxy, then allows you to put back in the constraints by hand to stiffen it up again and enforce a behaviour on a system's locales.

In QM, we have the problem of fluctuations - potential actions without a cancelling reaction. People start to think we might be able to turn the zero point energy into a perpetual motion machine! But fluctuations are tamed by the toy trick of virtual pair production. We say no, global conservation of energy still must rule. So what is actually happening down there (wink, wink) is that the vacuum is producing self-cancelling particle pairs.

The same with super-conductors. A real headache to model until BCS and the pretence that electrons joined hands to dash about as coupled bosonic pairs.

Mechanics is the sub-set of systems theory where the global constraints are treated as an equilibrium state that also enforces a local equilibrium. Once you have got the mechanics set up like this, a baseline view founded on a pervasive symmetry, then you can start modelling the propagation of change as symmetry breakings.

If the symmetry breakings are localised in some way, then you get a kind of quasi particle description of nature. If the symmetry breaking is global, well you get the big bang, the thermal model of time, etc.

(Talking of essential books again, Robert Laughlin's A Different Universe is a great polemic against the currently dominant reductionist mindset of physics).
 
  • #168
apeiron said:
I'm not sure whether you are agreeing or not. How do you account for a rocket ship for example? Do you try to sum up a bunch of tiny force vectors for the hot plume of combusting gas, or just go with the simpler single vector for the mass flow rate?
You won't do the former, but you could-- the latter is the sum of the former parts. That is why you know the latter will be an action/reaction, because its pieces are. Newton's prescription is purely reductionist, it involves identifying the fundamental binary interactions, one by one, and summing them up, so any global conservation law stems from rules about those fundamental interactions. Later, physicists were able to see that the origin of these rules could also be viewed as global constraints (symmetries and conservation laws), but the reason the rules could be expected to apply to the fundamental interactions was the quintessentially reductionist principle that every interaction played out exactly the same whether its elements were part of a larger system, or if they were themselves the entire universe, individually subject to the global constraints. The universe in the head of a pin-- that is a fact about all of the force laws invoked in Newton's program.

Now, I think you are arguing that this is more of a bug than a feature, because the idea that every interaction in a complex system must be the same as it would have been had the elements been the whole universe misses out on how global constraints back-react on the elements. I think that's a valid point, but it is something quite missing from Newton's scheme-- it's not the reason behind the third law, all his laws are blind to it, and all would work without it. In a universe where nothing at all happened that was not understandable in a purely reductionist way, Newton's laws could describe it all (never mind relativity and quantum mechanics, those are detailed breakdowns of Newton's laws-- what you have in mind is a much more fundamental lacking element).

So I'm not disputing your point that conceptualizing all interactions as binary action/reactions at a distance misses what is going on at the systems level, I'm just saying that none of Newton's laws require a systems level to operate, they are perfectly self-consistent reductionist laws. I think that's what made them so seductive. The reason we have a systems level is not because we need it to get Newton's third law, it's that we need it to get the larger context of what richer type of behavior is possible than Newton's three laws. People who point to those three laws and say, "but I can get everything the system is doing just from those laws, oh and some appropriate boundary conditions and perhaps an externally applied time-varying field because I know that I'm going to need all that" are just ignoring how their fingerprints are all over the result, shoehorning the systems-level behaviors like a Greek playwright invokes the obligatory "deus ex machina" to make the end play out as desired. That's the part I agree with, the important part of the case you are making.

In GR, of course, the law of conservation is not hardwired in as a global symmetry of the model. Instead, it has to be built in as a further constraint - such as by specifying an inertial reference frame. So GR relaxes a global constraint to make the baseline model all flexxy, then allows you to put back in the constraints by hand to stiffen it up again and enforce a behaviour on a system's locales.
Yes, it's that deus ex machina again.
Mechanics is the sub-set of systems theory where the global constraints are treated as an equilibrium state that also enforces a local equilibrium. Once you have got the mechanics set up like this, a baseline view founded on a pervasive symmetry, then you can start modelling the propagation of change as symmetry breakings.
The way I would frame that is, that's what mechanics looks like from the systems perspective. Mechanics can be set up from the reductionist perspective instead, which it normally is, but the advantage of seeing it from the systems perspective is that it immediately empowers you to relax the constraints of the model to encompass system-like behavior when you want to do that. Again it's the seductiveness of reductionism that shuns relaxing those constraints, almost like a person in deep water wishing to hold tight to the flotation device.
If the symmetry breakings are localised in some way, then you get a kind of quasi particle description of nature. If the symmetry breaking is global, well you get the big bang, the thermal model of time, etc.
Yes I think that's a useful insight, so I'll let this bear repeating:
(Talking of essential books again, Robert Laughlin's A Different Universe is a great polemic against the currently dominant reductionist mindset of physics).
 
  • #169
More books, and more debate... if I could somehow express clapping my hands together in girlish glee (a disturbing sight I assure you), I would.
 
  • #170
Hi Ferris,
Ferris_bg said:
If we want a mental event M1 to cause a physical event P2 and if we want the causal status of the mental to derive from the causal status of its physical realizer P1 (so that the theory doesn't fall in the substance dualist category) we are faced with over-determination (P2 could be realized by M1, as well as by P1 alone). If there are no greater causal powers that magically emerge at the higher level of M1 (if we want the theory to stay a materialistic one) then the causal powers of M1 are identical to the causal powers of P1, which means that P1 is the only realizer of P2, thus M1 becomes epiphenomenal. You can read more about this here: http://www.iep.utm.edu/mult-rea/#H4

So, in the materialistic view you can either have mental causation identical with the physical causation or you can embrace epiphenomenalism and qualia. Either way, free will is impossible. If you want to find free will, you must seek it outside the materialistic domain.

Q_Goest,

In your post https://www.physicsforums.com/showpost.php?p=3179362&postcount=90 you say you don't believe in the phenomenal-physical correlation and basically you reject epiphenomenalism. And at first that doesn't look logical: how can one make a knowledge claim about consciousness if it's epiphenomenal? But does the agent's association of the conscious experience of some event with its labeled state in the brain involve any contradiction? The definition of the word "consciousness" in the brain state is not associated with the experience of it, but does this prevent the brain from being able to label certain physical states? Think about how you would explain the word "consciousness" to a little boy and what association his brain makes. For me epiphenomenalism implies that in every millisecond your brain takes the optimal decision based on the available information. Even when you do something anti-evolutionary (take a lot of drugs, commit suicide) it must be somehow justified in your brain's calculations. Because if it's not, epiphenomenalism is wrong (remember you haven't taken the drug because YOU liked it, but because your BRAIN liked it).

I’m not too sure I really understand your point exactly, but I think you’d like to discuss the knowledge paradox a bit and I think your previous post to Ken is a perfect lead into that paradox. Note that I’m not presenting my opinions as much as I’m trying to maintain logical rigor here. I’m not suggesting that I reject or embrace epiphenomenalism. I’d like to point out one issue the epiphenomenalist argument must address but at this point there seems to be a fault in the logic.

Frank Jackson's "Epiphenomenal Qualia" is a highly cited paper, having been referenced over 1000 times. He makes the argument that phenomenal properties such as qualia are not phenomena that can be described by describing the physical information on which the phenomena supervene. I think this is a perfectly clear and legitimate argument. For example, we can describe physical information about any given physical phenomenon such as how fast a guitar string vibrates, the tension in it, pressure waves created in the air, the vibrational interaction between the string and guitar, or how the bonds within the steel string change length as the string stretches and how the mass and inertia of the string causes it to move at a given frequency over time. Once we describe ALL the physical information about how the guitar works we've exhausted all there is to describe and there really is nothing left. How any physical system changes over time can be described by describing the physical information, and once that is done, there is nothing left to describe because we've fully described everything. But as Jackson points out, for what we call qualia or phenomenal consciousness, we haven't described THOSE phenomena. A hypothetical neuroscientist might be able to describe everything there is to know about our nervous system, how our brains work, how neurons interact, how glia support neurons, how neurotransmitters work on a molecular level, etc... but we still haven't described what the guitar sounds like, how a rose smells, or any other phenomenal property.

Let’s take the lead you provided from Kim regarding mental states (M) and physical states (P). For the causal closure of the physical, there are physical events P that determine other physical events. The mental events M are supervenient on the physical states but they don’t cause physical states. What causes physical states, assuming the causal closure of the physical, are other physical states. So the hypothetical neuroscientist that knows everything there is to know about our nervous system, can tell you what physical state P2 will follow physical state P1 (or what set of potential physical states will follow P1 if there is some random nature to them). Mental states that are described as phenomenal states are therefore epiphenomenal on the physical state. The mental state doesn’t cause the physical state, the physical states are caused by other physical states.

Epiphenomenal, however, means that not only do these mental states not cause physical states, they also don't influence them. They don't have any way of influencing or causing a change in a physical state. If mental states were being "measured" by the physical state, they would no longer be epiphenomenal, they would suddenly become part of the causal chain that created the following physical state, so epiphenomenal in this regard means they really have no influence whatsoever over any physical state. So the paradox is, how do we know these states exist? The only reason given is that there is a 1 to 1 relationship between P and M, but that means we aren't saying that we experience qualia because we actually experience that qualia. It says we are saying we experience something because of the physical states that cause us to utter those words.

Shoemaker, “Functionalism and Qualia” 1975:
To hold that it is logically possible (or, worse, nomologically possible) that a state lacking qualitative character should be functionally identical to a state having qualitative character is to make qualitative character irrelevant both to what we can take ourselves to know in knowing about the mental states of others, and also to what we can take ourselves to know in knowing about our own mental states.

Rosenberg, “A Place for Consciousness”:
Shoemaker is worried that, if functionalism is false (and certainly if physicalism is false), the relations between brain states and conscious states will be accidental in that the qualia involved in consciousness would make no contribution to determining our brain states. Because our brain states drive our behavior, including our knowledge claims, it seems that qualia would be irrelevant to what we could or could not claim to know.

...

Given that we are capable of making knowledge claims about consciousness, we need to understand how consciousness could be relevant to the production of those claims. To connect consciousness to the production of our claims about it, somewhere in our explanation of our knowledge we will need to appeal to the effects of consciousness on brain states. Now these brain states are solidly physical, and we are assuming the causal closure of the physical meaning that nothing nonphysical can make a causal difference. But if consciousness cannot affect brain states, it cannot play any part in producing our claims about it, and so it seems that we could not really know about consciousness. Yet we do know about it. Hence, Liberal Naturalism is caught in a paradox.

I am stating the intuitive problem. The Liberal Naturalist seems committed to conceding that consciousness makes no contribution to the fact that we make the claims about it that we do, and that is deeply troubling. Because any accuracy in our claims about it would seemingly be based on fortuitous coincidence, it seems impossible that we could know about it.
 
Last edited by a moderator:
  • #171
Hi Pythagorean,
Pythagorean said:
But think about this: let's say you have some giant system of N differential equations to describe the whole universe. You have every single interaction reduced to a handful of variables. Now all you need to do is put in your initial conditions for those variables.

What do you do? Your theory already accounts for everything in the universe, yet your theory doesn't account for how the initial conditions arose. Do you make the initial conditions a function of some part of the system? So now there was always this loop and never a beginning or end? I'm puzzled, personally, I have no idea what I'd do.

Anyway, I'm hoping this demonstrates that the science and the philosophy are completely different, just like models and reality. As another example, we know that quantum mechanics underlies all classical observations, yet we naively model things in the old classical view. Why? Because it's effective, it's productive, it works. This is not the same way I approach the problem in a philosophical setting.
Not sure what the beginning of the universe has to do with this. We don't have a theory of how the universe began, so let's not even consider it.

Regarding models and reality, it seems very confusing to me why you seem to feel that the philosophy of weak emergence is a "model" and not reality, but some sort of strong emergence or downward causation is not the model but is reality itself. Also, I can't help but wonder if your views of the dynamic systems approach are actually the same as those of the published work or not and if you really understand what the issue is. For example, do you think molecules in a fluid (say Bénard cells for example) are causally affected not only by their local interactions with other molecules and with gravity but also causally affected by what occurs in distant parts of the fluid? What limits how quickly the molecules of a fluid at one point can affect molecules at a distant point? And what do you think is meant by the commonly used phrase, "the whole is greater than the sum of the parts"?
 
  • #172
Thanks for that clear description of what epiphenomenalism, and the "knowledge paradox", are all about. I don't think that Rosenberg's logic on the issue is solid. For example, when he says "But if consciousness cannot affect brain states, it cannot play any part in producing our claims about it, and so it seems that we could not really know about consciousness", there would seem to be a magical step in his argument: the step where he connects having knowledge about consciousness to having consciousness. I don't see those things as necessarily the same, so I see no paradox to the stance that a consciousness could emerge from a physical state, and knowledge about consciousness could also emerge from that same physical state. Indeed, that would seem to be an inevitable aspect of any flavor of physicalism. If both emerge from the same physical state, there is clearly no paradox in both having consciousness, and having knowledge about consciousness, without either affecting or altering the physical state at all.

But I don't buy it for other reasons. To me, physicalism, and functionalism, are both examples of putting the cart before the horse. We don't develop a physicalist, or a functionalist, viewpoint because we have any evidence that the universe really works that way, we do it simply because it succeeds in accomplishing the goals we have set out for the process. In a nutshell, if we establish physicalist goals, then physicalism is the path that leads there most economically. But whence comes the idea that this somehow means everything is physical? It's just bad logic to claim that follows, though one is certainly welcome to adopt it as a personal belief system, as with any religion.

However, a more natural stance, it seems to me, is that if one is interested in a physical question, like what are the neural correlates of some qualia, one should adopt a physicalist perspective, as one will not know where the blood is going by introspecting. But if one is interested in an epiphenomenal question, like what does pain feel like, one should adopt an epiphenomenal perspective, like introspection on the issue, because one will never know by watching blood flow what pain feels like.

As for which leads to which, again I see no evidence that a physical state leads to the qualia associated with it. Instead, it seems natural that both the physical state, and the epiphenomenal qualia, derive from something else, something we might consider to be what is Actually Happening There.

The physical correlates of the mental state are nothing but answers to a particular type of question about that state, and the idea that they are what "leads to" the mental state is an error in language, in my view. That's because what language is, above all, is connecting things to our experience. That's it, that's all language ever does-- it connects a phenomenon to our experience. What else can language do? Now, if our experience is always, at some level, a qualia, then the idea that physical states lead to the qualia has the situation exactly backward. Instead, when we speak of particles and potentials, we are using language, which connects to the qualia they trigger. We manipulate the qualia in terms of rules, or laws of physics, and generate outcomes, which are also qualia, which we then translate back to something we can compare to an experimental outcome, which is also a qualia. Somehow, in all that process, we are left with the idea that the qualia are derived from the physical states, but without the experience of qualia, the language we use to even talk about the physical states has no meaning at all. So it would seem that it is the qualia that lead to the physical states, in the sense that epiphenomenalism predates physicalism. The physical states do connect via a concept of causation, as Q_Goest described above, but as Hume so famously put it, we have no idea what causation is other than the observed tendency for one thing to follow another. In short, there is an algebra of qualia where one often follows another, and if we frame that in physicalist language, we can gain power over those causal connections whose origin is so mysterious to us.

In short, I would say the knowledge paradox has things backward-- the question is not, how can qualia matter if they never affect the physical states, the question is, how can what is physical matter if everything that we know about the physical is derived from our ability to experience qualia that we associate with a physical world? Instead of the qualia being a figment of our imagination, it is much easier to argue that the physical world we use the qualia to imagine is the thing we are imagining, albeit an imagination that follows rules outside our control. In short, the physical world is the thing we imagine when we want to imagine things that follow rules and are predictable, and what we call "our imagination" are the things we imagine when we don't impose that requirement.
 
Last edited:
  • #173
Let's not stray too far from the main topic of the thread with the knowledge paradox.
Q_Goest said:
The only reason given is that there is a 1 to 1 relationship between P and M, but that means we aren’t saying that we experience qualia because we actually experience that qualia. It says we are saying we experience something because of the physical states that cause us to utter those words.


Ken G said it very well:
Ken G said:
For example, when he says "But if consciousness cannot affect brain states, it cannot play any part in producing our claims about it, and so it seems that we could not really know about consciousness", there would seem to be a magical step in his argument: the step where he connects having knowledge about consciousness to having consciousness.


According to the epiphenomenalists, the physical states that cause the agent to utter certain words do NOT put the agent's qualia into their "equations", but this does not prevent the physical states from being able to define the word "consciousness" (associating it with certain neural firing).
 
  • #174
Ferris_bg said:
According to the epiphenomenalists, the physical states that cause the agent to utter certain words do NOT put the agent's qualia into their "equations", but this does not prevent the physical states from being able to define the word "consciousness" (associating it with certain neural firing).
Yes, it would seem that one is welcome to explore both consciousness and free will from the perspective of its neural correlates, but doing so will only help to answer certain types of questions. The physicalist has the neat solution of simply discounting all other questions as irrelevant, but by doing so, they forfeit the ability to answer a host of issues that are clearly relevant to the human condition, issues like morality, ethics, aesthetics, and what is a life worth living. All irrelevant issues to the physicalist, who can merely look at causation, and only through that looking-glass, darkly.
 
  • #175
Q_Goest said:
Hi Pythagorean,

Not sure what the beginning of the universe has to do with this. We don't have a theory of how the universe began, so let's not even consider it.

It has nothing to do with the beginning of the universe, just the general idea of a closed system vs. open system. If you can introduce a driving term to your model (such as a global electric field) even though you don't have a mechanistic model of how that driving term arose (you just know that you can measure it in the lab) then you've spoiled absolute reductionism (until you can provide a mechanistic description of the driving term, otherwise it's just some mathematical function you've added to make your system more descriptive.)

This goes for initial conditions too... we pick initial conditions as scientists, we don't have a theory for how they arose, so our system is always essentially open if it relies on initial conditions.
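As a toy illustration of that openness (my own sketch, not a model from any of the work cited in this thread): in the little system below, both the driving term and the initial conditions are imposed by hand; nothing inside the equations explains where they come from.

```python
import numpy as np
from scipy.integrate import solve_ivp

def forcing(t, amplitude=1.0, omega=2.0):
    """Phenomenological driving term: something you measured in the
    lab, not something derived from a mechanism inside the model."""
    return amplitude * np.sin(omega * t)

def damped_driven_oscillator(t, y, gamma=0.1, k=1.0):
    x, v = y
    return [v, -gamma * v - k * x + forcing(t)]

# The initial conditions are likewise supplied from outside the
# equations -- the model is "open" at both ends.
sol = solve_ivp(damped_driven_oscillator, (0.0, 50.0), [0.0, 0.0])
print(sol.y[0, -1])  # position at t = 50, given those external choices
```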

Regarding models and reality, it seems very confusing to me why you seem to feel that the philosophy of weak emergence is a "model" and not reality, but some sort of strong emergence or downward causation is not the model but is reality itself.

Why do you keep saying this?

1) I've told you three times now (once in a previous thread) that I don't advocate strong emergence or strong downward causation. That's one of the many places where apeiron and I differ.

2) I've never made an attempt to say "this is the reality". Only that models are not reality, they approximate it, you can't prove a negative, etc, etc.

3) models that contain weak emergence are successful in describing reality. As I've said before, "strong" emergence seems like a wanting human interpretation.

Please acknowledge that I've said this, since you've missed it three times before now.

Also, I can't help but wonder if your views of the dynamic systems approach are actually the same as those of the published work or not and if you really understand what the issue is.

I work as a dynamicist, for dynamicists, reading papers by other dynamicists (my most frequented journals are Chaos and Physical Review E).

For example, do you think molecules in a fluid (say Benard cells for example) are causally affected not only by their local interactions with other molecules and with gravity but also causally affected by what occurs in distant parts of the fluid? What limits how quickly the molecules of a fluid at one point can affect molecules at a distant point? And what do you think is meant by the commonly used phrase, "the whole is greater than the sum of the parts"?

Yes, they are affected by other parts of the fluid (in a turbulent regime, which is what most of nature is, even in biological systems). That is what the study of modern diffusion in turbulent transport is all about: fractalized diffusion terms (as opposed to classical diffusion). Anytime you have a fractalized derivative coupling your network components together, the behavior of one member no longer depends on just the nearest neighbors.

(By fractalized I mean, as I said in a post above: the nth derivative becomes the sth derivative, where s is a real number while n must be an integer, so you can have a 3.4th derivative.) From the limit definition of the derivative, derive the numerical form of an n=2 derivative, for instance. It will look something like V(m-1) + V(m+1) - 2V(m) (i.e. it will only depend on nearest neighbors, the (m-1)th and (m+1)th cells). This is not the case if the order is not an integer.

See the Hurst exponent. When H = 0.5, you have classical diffusion.
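For concreteness, here's what that locality difference looks like numerically, using the Grünwald-Letnikov discretisation (a sketch of my own; it is backward-looking rather than the centered stencil above, but the locality point is the same):

```python
import numpy as np

# Grunwald-Letnikov weights w_k = (-1)^k * binom(alpha, k), computed via
# the recurrence w_k = w_{k-1} * (k - alpha - 1) / k. The discretised
# derivative is D^alpha f(x) ~ h**(-alpha) * sum_k w_k * f(x - k*h).

def gl_weights(alpha, n):
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (k - alpha - 1.0) / k)
    return np.array(w)

print(gl_weights(2.0, 8))   # [ 1. -2.  1.  0.  0. ...] -> a local stencil
print(gl_weights(3.4, 8))   # every weight nonzero -> every past point contributes
```

For integer order the weights truncate after a few neighbors; for order 3.4 they never do, so the whole field enters each update.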

We have one of the leading complexity experts working on this problem (fractal diffusion) at our university right now. He works on SOC and complexity in turbulent plasmas. My advisor (his partner) works on complexity in biological systems.

and this:
What limits how quickly the molecules of a fluid at one point can affect molecules at a distant point?

That depends entirely on the system you're modeling and what aspects you're modeling. You can't model everything at once in complex systems.
 
Last edited:
  • #176
"whole greater than the sum of parts" to me means that (for instance) two people who take 2 hours to paint a house alone can paint it in :45 minutes (instead of 1 hour.. i.e. double the people doesn't mean half the time.. it actually makes more productivity because there's a synergistic effect). Energy and mass are still conserved (it still takes just as much paint, just as many paint strokes) but now the guy on the ladder doesn't have to climb up and down every time, since his partner doing the lower level can hand stuff to him.

In the same vein, as I double the number of neurons in my systems, the lifetime goes up faster than linearly (twice the neurons giving four times the lifetime, for instance).
 
  • #177
SPECULATION:

it seems to me that the holistic quantity is information.

But since (in principle) we can convert information to energy:

http://www.nature.com/nphys/journal/v6/n12/full/nphys1821.html

It means that there would be something to work out between conservation of energy and information holism that's not immediately obvious.

So perhaps the Energy+Information+Mass balance in the universe must remain constant, but information has two forms, just like energy: useless and useful. Useless information is entropy, and entropy must increase, but it is being compensated by a loss in useful information (information that acts, as in the demonstration above, to power the system).

Of course, from a human perspective, we're gaining useful information and compensating by putting useless information into the universe (entropy), but the universe as a whole continues to lose useful information, gaining entropy, diffusing towards heat death.
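For scale, the exchange rate in that demonstration is bounded by Landauer's limit of k_B T ln 2 per bit (a back-of-the-envelope number; room temperature assumed):

```python
import math

# Landauer's bound: erasing one bit costs at least k_B * T * ln(2) of free
# energy; conversely, one bit can be converted into at most that much work.
k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # room temperature, K

print(k_B * T * math.log(2))   # ~2.9e-21 J per bit
```

Tiny per bit, which is part of why the trade-off between energy conservation and information only becomes visible at the statistical level.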
 
Last edited by a moderator:
  • #178
Q_Goest said:
Let’s take the lead you provided from Kim regarding mental states (M) and physical states (P). For the causal closure of the physical, there are physical events P that determine other physical events. The mental events M are supervenient on the physical states but they don’t cause physical states. What causes physical states, assuming the causal closure of the physical, are other physical states. So the hypothetical neuroscientist that knows everything there is to know about our nervous system, can tell you what physical state P2 will follow physical state P1 (or what set of potential physical states will follow P1 if there is some random nature to them). Mental states that are described as phenomenal states are therefore epiphenomenal on the physical state. The mental state doesn’t cause the physical state, the physical states are caused by other physical states.

Epiphenomenal however, means that not only do these mental states not cause physical states, they also don’t influence them. They don’t have any way of influencing or causing a change in a physical state. If mental states were being “measured” by the physical state, they would no longer be epiphenomenal, they would suddenly become part of the causal chain that created the following physical state, so epiphenomenal in this regard means they really have no influence whatsoever over any physical state. So the paradox is, how do we know these states exist? The only reason given is that there is a 1 to 1 relationship between P and M, but that means we aren’t saying that we experience qualia because we actually experience that qualia. It says we are saying we experience something because of the physical states that cause us to utter those words.

OK, so here we have a view of reality that ends up arguing for a paradox. Which is why the intelligent response is to go back to the beginning and work on a different view, not to spend the rest of your life telling everyone you meet, "but this is the truth". That would be the crackpot response, wouldn't it?

Now it seems pretty transparent where the problem lies. If you start out assuming a definite separation between physical states and mental states, then it is no surprise that this is also the conclusion you end up with. And more subtly, you are even presuming something in claiming "states".

So let's start over. First we have to drop the idea of states because that already hardwires in a reductionist perspective. A state is something with spatial extent, but not a temporal extent. It is a term that already precludes change. It is the synchronic view of "everything that is the case at this moment".

A systems view is one that specifically includes time, change, potential, development. So of course if you analyse reality in terms of states, you cannot take a systems view of reality. You have not proved that the systems view fails, just that you did not understand what the systems view was.

The systems view equivalent of the notion of "state" would be an "equilibrium". That is a state where there is change that does not make a change. So you have an extended spatiotemporal view that is completely dynamic, but also at rest in some useful sense.

So your arguments cannot just hinge on a naive notion of a state here. That is the first point.

Second, P "states" are tales of material causality. And yes we expect the tales to be closed. This is a now standard physicalist presumption, and it works very well. So I am happy to take it as my presumption too.

I then, as said, make a distinction between varieties of physicalism.

There is the familiar reductionist physicalism of atomism - reality is constructed bottom-up from a collection of immutable parts. Though as also argued, reductionism does smuggle in global constraints as its immutable physical laws, and other necessary ingredients, such as entropic gradients, a spacetime void, etc.

Let's call this variety of physicalism Pr (because giving things this kind of pseudo-mathematical terminology seems more impressive).

Then there is the second systems model of physicalism - let's call it Ps. This, following Aristotle and many other systems thinkers, recognises holism. Realities are also made of their global constraints which act downwards to shape the identity and properties of their parts (by restricting their local degrees of freedom).

And as said, because even Pr smuggles the notion of global constraints into its simpler ontology, we can say {Ps {Pr}}. Reductionism is formally a subset of holism. It is holism where the top-down constraints have become frozen and unchanging, leaving only the localised play of atoms, or efficient causes.

You personally may disagree that Ps is a valid model of material causality, but you have yet to make any proper argument against it (I don't think you actually even understand it enough).

So on to M states. Again, you have to recognise the extra constraints implied by the very word "state". Consciousness has a rich temporal structure (we know this experimentally, Libet is part of the evidence). So it is not legitimate to hardwire your conclusions into your premises by presuming "M states" as an ontic category.

We must thus step back to the general metaphysical dichotomy of physical and mental (matter~mind). What do the terms properly denote?

We have already agreed (I think) that P is a closed tale of material causes. And it can be seen that we are also presuming that it is an "objective" view. It is somehow what "actually exists out there", even though, being good philosophers, we have long come to realize the map is not the territory and we are in fact only modelling the world. So it is what Nozick rightly calls the maximally invariant view - the supposed god's eye "view from nowhere".

So physicalism actually embeds further presumptions. It acknowledges its roots in subjectivity and thus becomes an epistemological device. It says this is how we model, in a certain way.

The "material world of closed causality" - either Pr or Ps - is not actually the ontological view, just a view of ontology! P implies M. Or {M{P}}. Or indeed {M{Ps{Pr}}}

Now what in turn is properly denoted by "mental"? Well, it starts as everything that is, so far as we are concerned. That is all there is really, as the only way we know anything is through being a mind.

But when used as part of a metaphysical dichotomy, the idea of mental, as opposed to physical, is trying after some more constrained meaning. It is trying to get at something which stands in contrast to our idea of the physical. So what? And what legitimately?

One of the obvious distinctions is between the observed and the observer, the interpreted and the interpreter, the modeled and the modeller. The very existence of a "done to" implies also the existence of a "doer". So there is a mind acting, and then the physical world it is acting upon.

And clearly a causal relationship is being suggested here, an interaction. I do the modelling and the world gets modeled. But I can also see the world is driving my modelling because of what happens when I wrongly model it.

So the everyday notion of the mental is about this contrast, and one that is still plainly causal. A connection is presumed as quite natural. So far the dichotomy seems natural, legitimate, and not paradoxical.

But then along come the philosophers who want to push the distinction further - to talk about res cogitans and res extensa, about qualia, about Ding an sich.

What were complementary aspects of an equilibrium-seeking process (a systems view of the mind as a pragmatic distinction between the observers and the observed) suddenly become treated as different fundamental categories of nature. The distinction becomes reified so that there is the P and the M as axiomatically disconnected realms - where now a connection has to be forged as a further step, and not being able to do so becomes treated as a metaphysical paradox.

So yes, P~M has a social history as an idea. And the assumptions made along the way have got buried.

The "mental" properly refers to the fact that reality can become complexly divided into actors and their actions, models and the modeled, the subjective experience that is our everything and the objective stance that is our attempt to imagine an invariant, god's eye, view of "everything" (which is actually a view constructed of general theories - or formalised descriptions of global constraints - and the predictions/measurements that animate these theories, giving them their locally-driven dynamics).

So P here becomes a judgement of the degree of success we feel in modelling reality in terms of fundamental theories - theories describing reality's global constraints. And Ps is a more complete approach to modelling than Pr, but Pr is also the simpler and easier to use.

M is then epiphenomenal in the sense it is all that is not then part of this model - and so it stands for the modeller. It is not epiphenomenal by necessity - everything is actually just subjective experience in the end. But it is epiphenomenal by choice. We put the M outside the P so as to make the P as simple as possible. It is a pragmatic action on our part.

Now Pr quite clearly puts M way outside because it does away with observers, modellers, and other varieties of global constraint (as explicit actors in the dynamics being modeled). So Pr becomes a very poor vehicle for the pragmatic modelling of "mind" - of systems which in particular have non-holonomic constraints and so have active and adaptive top-down control over their moment-to-moment "mental states".

But with Ps, you can start to write formal models of observers and the observed. You can't model "the whole of M" as even Ps remains within M. This is the irreducible part of the deal. Nothing could invert the relationship so far as M is concerned. Yet within M we can have the Ps-based models of observer~observed relationships. And indeed I've referred frequently to the work of Friston (Bayesian brains), Rosen (modelling relations), Pattee (epistemic cut), as examples of such systems-based modelling.

So M - Ps = M'. We can explain away a lot via physicalist modelling, yet there will still be a final residue. But it is not the M that is epiphenomenal to the P. Rather the other way round. The mind does not have to have models based on physicalist notions of closed systems of entailment to exist. It existed already. And it created the P that claims to exist as causally isolated from the subjective wishes, whims and desires of the M.
 
Last edited:
  • #179
Hi Q_Goest,

Q_Goest said:
Let’s take the lead you provided from Kim regarding mental states (M) and physical states (P). For the causal closure of the physical, there are physical events P that determine other physical events. The mental events M are supervenient on the physical states but they don’t cause physical states. What causes physical states, assuming the causal closure of the physical, are other physical states. So the hypothetical neuroscientist that knows everything there is to know about our nervous system, can tell you what physical state P2 will follow physical state P1 (or what set of potential physical states will follow P1 if there is some random nature to them). Mental states that are described as phenomenal states are therefore epiphenomenal on the physical state. The mental state doesn’t cause the physical state, the physical states are caused by other physical states.

apeiron said:
If you start out assuming a definite separation between physical states and mental states, then it is no surprise that this is also the conclusion you end up with. And more subtly, you are even presuming something in claiming "states".

Maybe another way to express apeiron's comment is the following:
Please have another look at http://en.wikipedia.org/wiki/Gun_(cellular_automaton). Would you say the gun causes the emission of the spaceships, or would you say it doesn't, because all states just follow from a previous state?

I guess you would say there's a causal link, even though this link can also be described in terms of P1 to P2 transitions. Of course we don't know if free will follows the same trick, but don't you think this analogy still demonstrates the lack of logical impossibility between determinism and free will?
 
Last edited by a moderator:
  • #180
Hi apeiron. Thanks very much for trying to explain. Seriously. I appreciate you attempting to thoughtfully and carefully bring out those views that you feel are pertinent to this portion of the discussion. I’d like to better understand your views, so I appreciate you taking the time to try and explain them. Perhaps you could start a thread that highlighted those views and provide references so I can dig deeper. That said, I honestly can’t make heads or tails of your post. Take this for instance.
A state is something with spatial extent, but not a temporal extent. It is a term that already precludes change.
I don't see how physical states preclude change. Physical states exist in both space and time. In phase space (http://en.wikipedia.org/wiki/Phase_space#Thermodynamics_and_statistical_mechanics), for example, if a system consists of N particles, then a point in the 6N-dimensional phase space describes the dynamical state of every particle in that system, as each particle is associated with three position variables and three momentum variables. In this sense, a point in phase space is said to be a microstate* of the system. Physical states require both dimensional and temporal information to describe them, so I don't know why one would claim that physical states don't have a "temporal extent". I don't know what that means.
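To pin down what I mean by a microstate, here's a trivial sketch (arbitrary toy numbers):

```python
import numpy as np

# A microstate: one point in 6N-dimensional phase space -- three position
# and three momentum coordinates per particle.
N = 4
q = np.random.rand(N, 3)                          # positions
p = np.random.rand(N, 3)                          # momenta
microstate = np.concatenate([q.ravel(), p.ravel()])
print(microstate.shape)                           # (24,) = 6N for N = 4

# The momenta carry the temporal information: Hamilton's equations
# dq/dt = p/m, dp/dt = F(q) take this single point to the next one.
```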

I'd like to honestly understand why there are people who find the nonlinear approach (dynamics approach, systems approach, etc...) of Alwyn Scott, Evan Thompson, and many others so appealing, but these are not mainstream ideas. Would you not agree? The mainstream ideas surrounding how consciousness emerges regard computationalism, which doesn't seem to fit with this other approach. From where I sit, weak emergence and separability of classical systems are well founded, mainstream ideas that are far from being overturned. They are used daily by neuroscientists who take neurons out and put them in Petri dishes and subject them to controlled experiments as if they were still in vivo. Then they compare this reductionist experiment with the brain and with computer models which are clearly only weakly emergent. So what is it that is really being promoted by this systems approach?

*The microstates of the system are what Bedau is referring to when he defines weak emergence.

Lievo said:
Please have another look at http://en.wikipedia.org/wiki/Gun_(cellular_automaton). Would you say the gun causes the emission of the spaceships, or would you say it doesn't, because all states just follow from a previous state?

I guess you would say there's a causal link, even though this link can also be described in terms of P1 to P2 transitions. Of course we don't know if free will follows the same trick, but don't you think this analogy still demonstrates the lack of logical impossibility between determinism and free will?
It is perfectly acceptable, in layman's terms, to say that the gun caused the emission of the spaceship. I talk to other engineers about how a valve causes a pressure drop in a fluid flowing through it. Certainly the valve has no 'free will', just as the gun in the Game of Life (GoL) has no free will to create a spaceship. The point being that layman's terms are not applicable to what is the 'efficient' cause. Yes, causes can go right down to some most primitive particle, and hence it is the desire of some physicists to find a "theory of everything". One can say the gun caused the spaceship, but as weak emergence would have it, the ability to emit a spaceship is dependent on the ability of individual cells in the Game of Life to change state from white to black and back again, which is a function of the rules of the game, just as lower level physical laws are the 'rules' by which higher level phenomena appear. Even classical mechanics is taken to be 'emergent' on the interactions of many molecules, atoms or particles, just as the gun and spaceship in the GoL emerge from the interactions of the individual cells. However, separability breaks down at the level of molecular interactions and below. Somewhere between the classical scale and the QM scale, there must be a change in the basic philosophy of how to treat nature, and how to treat causation.
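To make that concrete, the entire 'physics' of the GoL is the local update rule below (a quick numpy sketch of my own; the initial pattern is a standard glider, standing in for the spaceship):

```python
import numpy as np

# One Game of Life generation: birth on 3 neighbours, survival on 2 or 3.
# Wrap-around edges are used here purely for brevity.
def step(grid):
    nbrs = sum(np.roll(np.roll(grid, i, axis=0), j, axis=1)
               for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))
    return ((nbrs == 3) | ((grid == 1) & (nbrs == 2))).astype(np.uint8)

grid = np.zeros((10, 10), dtype=np.uint8)
for r, c in [(1, 2), (2, 3), (3, 1), (3, 2), (3, 3)]:   # a glider
    grid[r, c] = 1

for _ in range(4):       # a glider reappears shifted by one cell every 4 steps
    grid = step(grid)
print(grid.sum())        # still 5 live cells -- the 'spaceship' persists
```

Nothing over and above this rule is ever consulted, which is exactly the weakly emergent picture.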


Oh... and by the way. I (think I) agree with both of you regarding the fundamental issue that the knowledge paradox seems to flounder on. The problem starts out by using as an axiom that phenomenal states are not physically describable. They are not physical states. Once you define qualia that way, you may as well go whole hog and admit that the causal closure of the physical is false. These two axioms are at odds which is why there is a paradox.
 
Last edited by a moderator:
  • #181
Q_Goest said:
I don't see how physical states preclude change. Physical states exist in both space and time. In phase space (http://en.wikipedia.org/wiki/Phase_space#Thermodynamics_and_statistical_mechanics), for example, if a system consists of N particles, then a point in the 6N-dimensional phase space describes the dynamical state of every particle in that system, as each particle is associated with three position variables and three momentum variables. In this sense, a point in phase space is said to be a microstate* of the system. Physical states require both dimensional and temporal information to describe them, so I don't know why one would claim that physical states don't have a "temporal extent". I don't know what that means.

So you would disagree with the Wiki definition of states in classical physics as "a complete description of a system in terms of parameters such as positions and momentums at a particular moment in time"?
http://en.wikipedia.org/wiki/State_(physics)

All you are saying in pointing out that 6 dimensions can capture "all" the dynamics of a particle is that this is the way dynamics can be modeled in reductionist terms. You can freeze the global aspects (the ones that would be expressed in time) and describe a system in terms of local measurements.

Yet we know from QM that the position and momentum cannot be pinned down with this arbitrary precision - this seems a very strong ontological truth, no? And we know from chaos modelling that a failure to be able to determine initial conditions means that we cannot actually construct the future of a collective system from a 6N description. And we know from thermodynamics that we cannot predict the global attractor that will emerge in such a 6N phase space even if we did have exactly measured initial conditions - the shape of the attractor can only emerge as a product of a simulation. Etc, etc.
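To make the chaos point concrete (a toy Python sketch: standard Lorenz parameters, crude Euler stepping, nothing more):

```python
import numpy as np

# Two Lorenz trajectories whose initial conditions differ by one part
# in a billion.
def step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0/3.0):
    x, y, z = s
    return s + dt * np.array([sigma * (y - x),
                              x * (rho - z) - y,
                              x * y - beta * z])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])
for _ in range(4000):                 # integrate to t = 40
    a, b = step(a), step(b)
print(np.linalg.norm(a - b))          # separation now of order the attractor itself
```

A 6N description that cannot fix that ninth decimal place has lost the future of the system.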

So we know for many reasons that a state based description of reality is a reduced and partial model good for only a limited domain of modelling. To then use it as the unexamined basis of philosophical argument is a huge mistake. Even if as you argue, it is a "mainstream" mistake.

You probably still don't understand why states are the local/spatial description and exclude global temporal development. But "at a particular moment in time" seems a pretty clear statement to me. It is the synchronic rather than diachronic view. Surely you are familiar with the difference?

I'd like to honestly understand why there are people who find the nonlinear approach (dynamics approach, systems approach, etc...) of Alwyn Scott, Evan Thompson, and many others so appealing, but these are not mainstream ideas. Would you not agree? The mainstream ideas surrounding how consciousness emerges regard computationalism, which doesn't seem to fit with this other approach. From where I sit, weak emergence and separability of classical systems are well founded, mainstream ideas that are far from being overturned. They are used daily by neuroscientists who take neurons out and put them in Petri dishes and subject them to controlled experiments as if they were still in vivo. Then they compare this reductionist experiment with the brain and with computer models which are clearly only weakly emergent. So what is it that is really being promoted by this systems approach?

Perhaps your area of expertise is computer science and so yes, this would not be the mainstream view in your world. But I am not sure that you can speak for neuroscience here. In fact I know you can't.

I have repeatedly challenged you on actual neuroscience modelling of the brain, trying to direct your attention to its mainstream thinking - the effect of selective attention on neural receptive fields has been one of the hottest areas of research for the last 20 years. But you keep ducking that challenge and keep trying to find isolated neuron studies that look comfortably reductionist to you.

You just don't get the irony. Within neuroscience, that was the big revolution of the past 20 years. To study the brain, and even neurons and synapses, in an ecologically valid way. Even the NCC hunt of consciousness studies and the brain imaging "revolution" was based on this.

People said we have been studying the brain by isolating the components. And it has not really told us what we want to know. We stuck electrodes into the brains of cats and rats. But they were anaesthetised, not even conscious. And it was single electrodes, not electrode arrays. But now (around 20 years ago) we have better equipment. We can record from awake animals doing actual cognitive tasks and sample activity from an array of regions. Even better, we can stick humans in a scanner and record the systems level interactions.

Yet you say the mainstream for neuroscience is people checking the electrical responses of dissected neurons in petri dishes, or IBM simulations (gee, you don't think IBM is just about self-promotion of its supercomputers here?).

I used to write for Lancet Neurology, so I think I have a better idea of what is mainstream in neuroscience.

Again, remember that my claim here is not that reductionism (the computer science view of life) is wrong. Just that it is the subset of the systems view you arrive at when you freeze out the issue of global constraints. It is the adiabatic view. Where the larger Ps model also has to be able to deal with the non-adiabatic story - where global constraints actually develop, evolve, change, in time.
 
Last edited by a moderator:
  • #182
Q_Goest said:
Oh... and by the way. I (think I) agree with both of you regarding the fundamental issue that the knowledge paradox seems to flounder on. The problem starts out by using as an axiom that phenomenal states are not physically describable. They are not physical states. Once you define qualia that way, you may as well go whole hog and admit that the causal closure of the physical is false. These two axioms are at odds which is why there is a paradox.

Or instead, you could recognise that you had made a wrong move in assuming P and M to be ontologically separate (rather than epistemically separable - big difference).

The axiom that P and M are separate can be false while the axiom that P is closed remains true.

And you again seem to be missing the point that axioms are epistemological assertions of modelling convenience rather than statements of ontological truth. They are "truths" that seem reasonable on the grounds of generalised experience, rather than truths known to be true by some magical kind of direct revelation.
 
  • #183
Q_Goest said:
Perhaps you could start a thread that highlighted those views and provide references so I can dig deeper.
You're asking apeiron to provide references to support his claims... dude you like to live with risk! :biggrin:

Q_Goest said:
The point being that layman's terms are not applicable to what is the 'efficient' cause.
I don't think I get your point here. What is the difference you see between layman causality and efficient causality?

Q_Goest said:
One can say the gun caused the spaceship, but as weak emergence would have it, the ability to emit a spaceship is dependent on the ability of individual cells in the Game of Life to change state from white to black and back again, which is a function of the rules of the game
Or one can say that the gun is an algorithm, which indeed it is (and a simple one: it's just a periodic oscillator), thus the behavior does not need to be tied to the particular rules of CGoL: any system mathematically equivalent to these guns is fundamentally the same system. So if we allow ourselves to say that the gun causes the spaceship, then the causality is in fact the identity to a Turing machine. That said, a non-trivial consequence is that free will is a set of algorithms, defined as those which can behave as something we would recognize as having free will. (...) It's late, I'm becoming unclear I guess. See you. :smile:
 
Last edited:
  • #185
Hi apeiron. I found that last post to be totally understandable. Not like the previous post at all. I really wonder how you manage to switch the flowery talk on and off like that. No offense intended.

apeiron said:
Yet we know from QM that the position and momentum cannot be pinned down with this arbitrary precision - this seems a very strong ontological truth, no?
But pinning down particles isn't important to classical mechanics. Sure, the real world isn't classical, but that's not the point. The conventional view is that quantum mechanics isn't a factor in how the brain works because there are sufficient statistical aggregates of particles that individual particles don't matter. They're simply averaged together. Is this "systems view" dependent on individual particles? Clearly Alwyn Scott, for example, makes the point, as do many others, that classical mechanics has this 'more than the sum' feature already intrinsic to it, and I'd think Scott's views probably mirror your ideas fairly closely.

And we know from chaos modelling that a failure to be able to determine initial conditions means that we cannot actually construct the future of a collective system from a 6N description.
But is that because of not having the initial conditions of the individual particles? Or not having the initial conditions of the classically defined states? Density, internal energy, entropy, etc... of the system is obviously independent of specific individual particle states, but not of the aggregate. So not knowing individual particle states will lead to indeterminate future states, and yes, the classical model is a model. But one has to show that there is a "meaningful difference" between having initial particle conditions and having initial classical conditions. I see no meaningful difference. Sure the classical approach isn't exact, but it is exact to the degree you have the initial conditions of the classical states and that's what's important unless the quantum mechanical states are being roped into causing different classical states by downward causation which isn't possible.

And we know from thermodynamics that we cannot predict the global attractor that will emerge in such a 6N phase space even if we did have exactly measured initial conditions - the shape of the attractor can only emerge as a product of a simulation. Etc, etc.
Can you provide an example of a global attractor? One that regards classical mechanics and can't be predicted by weak emergence? This is fundamentally where we disagree. I would say Benard cells are a perfect example of a weakly emergent structure, and I'd contend that's a mainstream idea, not just my own. Regarding mainstream, perhaps that feels like a knock so instead I'll say "cutting edge" or something. Anyway, as Davies points out.
Thus we are told that in Benard instability, … the molecules organize themselves into an elaborate and orderly pattern of flow, which may extend over macroscopic dimensions, even though individual molecules merely push and pull on their near neighbors. This carries the hint that there is a sort of choreographer, an emergent demon, marshalling the molecules into a coherent, cooperative dance, the better to fulfil the global project of convective flow. Naturally this is absurd. The onset of convection certainly represents novel emergent behavior, but the normal inter-molecular forces are not in competition with, or over-ridden by, novel global forces. The global system 'harnesses' the local forces, but at no stage is there a need for an extra type of force to act on an individual molecule to make it comply with a 'convective master plan'.
Also from Davies
Strong emergence cannot succeed in systems that are causally closed at the microscopic level, because there is no room for additional principles to operate that are not already implicit in the lower-level rules.
However Davies does allow for some kind of emergence at the border between classical and quantum mechanics, which is where separability breaks down also.

You probably still don't understand why states are the local/spatial description and exclude global temporal development. But "at a particular moment in time" seems a pretty clear statement to me. It is the synchronic rather than diachronic view. Surely you are familiar with the difference?
No, I'm not. Feel free... Regardless, why should it be controversial to suggest that there exists a physical reality at a particular moment in time? If the argument drops into quantum mechanics, there’s no point in arguing. At that point, we have to suggest that neurons interact due to some quantum mechanical interaction, which isn’t worth arguing about.

Perhaps your area of expertise is computer science and so yes, this would not be the mainstream view in your world. But I am not sure that you can speak for neuroscience here. In fact I know you can't.
I guess I disagree.

I have repeatedly challenged you on actual neuroscience modelling of the brain, trying to direct your attention to its mainstream thinking - the effect of selective attention on neural receptive fields has been one of the hottest areas of research for the last 20 years. But you keep ducking that challenge and keep trying to find isolated neuron studies that look comfortably reductionist to you.
I think you misunderstand. My daughter has selective attention. I have no doubt they influence neural receptive fields! lol But that’s like saying the spaceship is caused by the gun that Lievo keeps talking about. Unless you can clearly define selective attention and neural receptive fields, it won’t help.

You just don't get the irony. Within neuroscience, that was the big revolution of the past 20 years. To study the brain, and even neurons and synapses, in an ecologically valid way. Even the NCC hunt of consciousness studies and the brain imaging "revolution" was based on this.

People said we have been studying the brain by isolating the components. And it has not really told us what we want to know. We stuck electrodes into the brains of cats and rats. But they were anaesthetised, not even conscious. And it was single electrodes, not electrode arrays. But now (around 20 years ago) we have better equipment. We can record from awake animals doing actual cognitive tasks and sample activity from an array of regions. Even better, we can stick humans in a scanner and record the systems level interactions.

Yet you say the mainstream for neuroscience is people checking the electrical responses of dissected neurons in petri dishes, or IBM simulations (gee, you don't think IBM is just about self-promotion of its supercomputers here?).
I don’t disagree with anything you said here except that I’m sure IBM isn’t somehow influencing neuroscience with profits from their computers. Sure, we’ve made progress in understanding how brains work, just as you say. That’s all the kind of work that’s necessary for the reductionist approach. I suspect you intend to mean that all this experimentation is unique somehow to a systems approach, but I don’t see why.
 
  • #186
apeiron said:
I supplied Q Goest with the references he requested long ago. Perhaps he did not read them? That often happens here doesn't it :wink:?

https://www.physicsforums.com/showpost.php?p=2501587&postcount=7
Not sure why you brought up that link.
You quoted Bedau, who only accepts weak emergence, as I've pointed out.
You quoted Emmeche, who rejects strong downward causation, and whose medium downward causation is frighteningly like weak downward causation. It has to drop into one or the other category; which one isn't clear.
You quoted yourself on Physicsforums
You quoted Google
And this one: http://www.calresco.org/
and a handful of other web sites. I don't want to read web sites though, except perhaps the Stanford Encyclopedia of Philosophy or maybe Wikipedia.

I guess we should just disagree and leave it at that.
 
Last edited by a moderator:
  • #187
Lievo said:
I don't think I get your point here. Why do you agree to attribute the gun with freedom but not valves,
I'm not attributing the gun with freedom (free will). I'm saying it has just as much as the valve, which is none.

Lievo said:
and what is the difference you see between layman causality and efficient causality?
Layman causality is not efficient causality.
 
  • #188
apeiron said:
I supplied Q Goest with the references he requested long ago. Perhaps he did not read them? That often happens here doesn't it :wink:?
Perhaps he had a look and the references were not supporting your claims unless a lot of creativity was involved. That sometimes happens, doesn't it :wink:?
 
Last edited:
  • #189
Q_Goest said:
Layman causality is not efficient causality.
That's what you said, and I perfectly understood that it was what you said. My question again: please explain why you think there is a difference.
 
Last edited:
  • #190
Q_Goest said:
I'd like to honestly understand why there are people who find the nonlinear approach (dynamics approach, systems approach, etc...) of Alwyn Scott, Evan Thompson, and many others so appealing, but these are not mainstream ideas. Would you not agree? The mainstream ideas surrounding how consciousness emerges regard computationalism, which doesn't seem to fit with this other approach. From where I sit, weak emergence and separability of classical systems are well founded, mainstream ideas that are far from being overturned.

This seems like a contradiction to me: the nonlinear approach, systems approach, i.e. the complex systems approach, isn't at odds with computationalism (and is indeed part of it). Any time you read about spiking networks, you're reading about physiologically derived neuron models that are inherently nonlinear and give rise to chaos when coupled together with biologically derived coupling terms (excitatory, inhibitory, diffusive). Since there are several operating in a network, you have a complex system. Yes, this is mainstream (but in a nascent manner).
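As a concrete illustration of the kind of model I mean, here are two FitzHugh-Nagumo neurons, a standard 2D caricature of spiking dynamics standing in for the physiological models, with diffusive coupling (a toy sketch; the parameters are generic textbook values, not from any particular paper):

```python
import numpy as np

# Two FitzHugh-Nagumo neurons with diffusive (electrical) coupling.
def fhn(v, w, I=0.5):
    dv = v - v**3 / 3.0 - w + I        # fast voltage-like variable
    dw = 0.08 * (v + 0.7 - 0.8 * w)    # slow recovery variable
    return dv, dw

dt, g = 0.05, 0.1                      # time step, coupling strength
v, w = np.array([-1.0, -1.2]), np.zeros(2)
for _ in range(5000):
    dv0, dw0 = fhn(v[0], w[0])
    dv1, dw1 = fhn(v[1], w[1])
    # each neuron also feels its partner's voltage difference
    v = v + dt * (np.array([dv0, dv1]) + g * (v[::-1] - v))
    w = w + dt * np.array([dw0, dw1])
print(v, w)    # the identical pair settles toward synchronized oscillation
```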

Evidence that it's mainstream:

Journals

Physical Review E now includes biological systems:
http://pre.aps.org/
neuroscience in that journal:
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2713719/

AIP: Chaos: An Interdisciplinary Journal of Nonlinear Science (you should actually read this:)
http://chaos.aip.org/about/about_the_journal

A neuroscience example from Chaos:
http://chaos.aip.org/resource/1/chaoeh/v18/i2/p023102_s1?isAuthorized=no

Neuroscience:
http://www.ncbi.nlm.nih.gov/pubmed/10362290

Chemical Physics and Physical Chemistry:
http://onlinelibrary.wiley.com/doi/10.1002/cphc.200500499/full

Published Authors:

Wulfram Gerstner (head of the Computational Neuroscience department at the Mind-Brain Institute in Lausanne, Switzerland)
http://icwww.epfl.ch/~gerstner//BUCH.html

Ermentrout:
http://www.pitt.edu/~phase/

Eugene Izhikevich's:
http://www.braincorporation.com/

Izhikevich wrote the textbook, "Dynamical Systems in Neuroscience"
http://www.izhikevich.org/publications/dsn.pdf

There's Tsumoto:
http://pegasus.medsci.tokushima-u.ac.jp/~tsumoto/achieve/index-e.html

It's the complex global behavior that is unpredictable (which is why we have to run simulations, then interpret them geometrically: i.e. pretty pictures that we interpret from our personal experiences and what we know experimentally about the neurons). There's no "deterministic solution" to the resulting complex behavior. Yet it still can be quantified, with average lifetimes, finding the different qualitative regimes through bifurcation analysis, etc. You can go back and look at a bunch of numbers that the code spits out, but they're not very meaningful in the standard mathematical analysis. We need Poincare's geometric approach.
 
Last edited by a moderator:
  • #191
Q_Goest said:
Hi apeiron. I found that last post to be totally understandable. Not like the previous post at all. I really wonder how you manage to switch the flowery talk on and off like that. No offense intended.

Do you have examples of this flowery language? Systems science being an interdisciplinary subject, there is a problem that there are a lot of different jargons - ways of saying the same things coming from different fields.

But pinning down particles isn't important to classical mechanics. Sure, the real world isn't classical, but that's not the point. The conventional view is that quantum mechanics isn't a factor in how the brain works because there are sufficient statistical aggregates of particles that individual particles don't matter. They're simply averaged together. Is this "systems view" dependent on individual particles? Clearly Alwyn Scott, for example, makes the point, as do many others, that classical mechanics has this 'more than the sum' feature already intrinsic to it, and I'd think Scott's views probably mirror your ideas fairly closely.

QM shows there is a problem even at the fundamental physical level. And the systems argument is that the same thing is going on over all scales of analysis.

Let me remind you what the systems approach actually is.

It says systems are formed hierarchically from two complementary kinds of causality - bottom-up construction and top-down constraint. And they are "a system" in that they are mutual or synergistic in their causality. Each is making the other. And so they develop (emerge) together in holistic fashion.

You are arguing from the reductionist viewpoint where there is a definite atomistic grain to reality. You have a bunch of stuff that already exists (it is not emergent) and it constructs some kind of global order (the forms emerge from the materials).

The reductionist story cannot of course address the reasons why the local atomistic grain exists. And while a "global state" might emerge, this is not the same as the global constraints of a system.

The temperature or pressure of an ideal gas is a macroscopic measurement, not a constraint. The constraints of the ideal gas would be the walls of the container, the external bath that keeps the system at a constant equilibrium, etc. All the general conditions that allow the system to have "a condition".

The systems approach instead says all that exists locally are degrees of freedom. Well, exist is too strong a word for random fluctuations - an unlimited number of degrees of freedom. So there is nothing definite at the scale of the local grain at the beginning.

But then unconstrained fluctuations lead to randomly occurring interactions and at some point constraints start to form as a result. Constraints have the effect of limiting local degrees of freedom. There is suddenly the beginnings of less freedom in the fluctuations and so a more definite direction to their action. The global constraints in turn become more definite, feeding back again on the local freedoms and the whole system undergoes a phase transition to a more ordered state.

This is a model familiar from many fields - Peircean semiotics, spin glasses, generative neural nets, Haken's synergetics, second order cybernetics, Hofstadter's strange loops, Salthe's hierarchy theory, Ulanowicz's ecological ascendancy, etc.

The key point is that there is now no definite local atomistic grain. It is part of what must emerge. It is like solitons or standing waves. The "particles" composing the system are locally emergent features that exist because of global constraints on local freedoms.

So QM says there are no locally definite particles until there is an act of observation - a decoherence of the wavefunction by an observing world.

And the same at the neuroscience level. A neuron's receptive field is constrained first by a developmental history (learning over time) and then even by attentional effects (top-down shaping over the course of 100 ms or so).

So reductionism gives you the frozen, static, simple description of a system. The local grain just exists (whereas in the systems view it is shaped by constraint). And the global constraints are unchanging (whereas in the systems view, they are part of what has to self-organise).

Consider the parallels with genetic algorithms. A pool of code (representing many local degrees of freedom) is forced to self-organise by imposing some general global constraints. A program is evolved. (This is not exactly what I am talking about, just an illustration that might be familiar to you.)

As to Scott, it is a long time since I spoke to him or read his book, so I can't actually remember how close he is to my current views. Though from dim memory, I think he was a little more simplistic - like John Holland and others pushing the Santa Fe view of complexity that was vogue in the US at that time. (Gell-Mann and Kauffman were deeper IMO)

But is that because of not having the initial conditions of the individual particles? Or not having the initial conditions of the classically defined states? Density, internal energy, entropy, etc... of the system is obviously independent of specific individual particle states, but not of the aggregate. So not knowing individual particle states will lead to indeterminate future states, and yes, the classical model is a model. But one has to show that there is a "meaningful difference" between having initial particle conditions and having initial classical conditions. I see no meaningful difference. Sure the classical approach isn't exact, but it is exact to the degree you have the initial conditions of the classical states and that's what's important unless the quantum mechanical states are being roped into causing different classical states by downward causation which isn't possible.

If everything is nicely "linear" - static and unchanging at the atomistic level due to static and unchanging global constraints - then coarse-graining can be good enough for modelling.

But the discussion was about systems that are complex and developing - such as brains. Where the global constraints are, precisely, non-holonomic. Where the local grain (for example, neural receptive fields) are dynamically responsive.

Taking an ideal gas as again a standard classical physics model of a system, note that we can impose a temperature on the system, but we cannot determine the individual kinetic freedoms of the particles. We can only constrain them to a Gaussian distribution.

So this is coarse graining in action. The individual motions are unknown and indeed unknowable (Maxwell's Demon). But they are constrained to a single statistical scale - that of the system's now atomistic microstates. And we can calculate on that basis. We have frozen out the sources of uncertainty (either local or global) so far as our modelling is concerned.
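In miniature (a toy sketch, with a molecular mass roughly that of N2 assumed):

```python
import numpy as np

# Impose a temperature, sample the individually unknowable velocities from
# the Maxwell-Boltzmann distribution (Gaussian per component), then recover
# the temperature from the aggregate alone.
k_B, T, m, N = 1.380649e-23, 300.0, 4.65e-26, 100000

v = np.random.normal(0.0, np.sqrt(k_B * T / m), size=(N, 3))
T_measured = m * np.mean(np.sum(v**2, axis=1)) / (3.0 * k_B)
print(T_measured)   # ~300 K: a coarse constraint satisfied, no particle known
```

The global constraint (T) is real and calculable while every individual degree of freedom stays unknown.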

Can you provide an example of a global attractor? One that regards classical mechanics and can't be predicted by weak emergence? This is fundamentally where we disagree. I would say Benard cells are a perfect example of a weakly emergent structure, and I'd contend that's a mainstream idea, not just my own. Regarding mainstream, perhaps that feels like a knock so instead I'll say "cutting edge" or something. Anyway, as Davies points out.

I gave you the Collier reference on Benard cells. Can you instead supply me with a reference where the global organisation was predicted purely from a local model of the molecules thrown together?

...I'll answer the rest of your points later...
 
  • #192
Lievo said:
Perhaps he had a look and the references were not supporting your claims unless a lot of creativity was involved. That sometimes happens, doesn't it :wink:?

Sometimes people say that is what happened - yet strangely cannot then support their opinion in public. At least Q Goest argues his corner. If you were more confident of your views, perhaps you would too?
 
  • #193
Q_Goest said:
Also from Davies

However Davies does allow for some kind of emergence at the border between classical and quantum mechanics, which is where separability breaks down also.

Davies correctly says that global constraints are not some extra force. Force is a localised, atomised, action - efficient causality. Constraints are just constraints. They might sound "forceful" because they act downwards to constrain the local degrees of freedom (as I say, shape them to have some distinct identity). But they are a complementary form of causality. A constraining or limiting action, not a constructive or additive one.

He is also right in saying that you cannot have constraints emerging if the analysis only recognises the micro-scale. If you close off causality at the microscale in your modelling, there is indeed no room for anything else.

No, I'm not. Feel free... Regardless, why should it be controversial to suggest that there exists a physical reality at a particular moment in time? If the argument drops into quantum mechanics, there’s no point in arguing. At that point, we have to suggest that neurons interact due to some quantum mechanical interaction, which isn’t worth arguing about.

The synchronic~diachronic dichotomy is used frequently in the emergence literature - as in the Bedau paper you appear to prefer.

http://people.reed.edu/~mab/papers/principia.pdf

This is not about QM issues but the general modelling of structures and processes - systems modelling.

I think you misunderstand. My daughter has selective attention. I have no doubt they influence neural receptive fields! lol But that’s like saying the spaceship is caused by the gun that Lievo keeps talking about. Unless you can clearly define selective attention and neural receptive fields, it won’t help.

What is undefined about the concepts of selective attention and neural receptive fields in the literature?

And Lievo's guns are precisely not an example of anything I have been talking about. What could be a more reductionist view of reality than CA? He may see "spaceships" and "guns" looking at the rigid operations of a finite state automaton. But that "meaning" is completely absent from the CA itself. It emerges nowhere within it.

Bedau uses these guns to argue for weak emergence. And I agree, so far as emergence goes, it is as weak as can be imagined. I am talking about something else here.

I don’t disagree with anything you said here except that I’m sure IBM isn’t somehow influencing neuroscience with profits from their computers.

I was saying IBM dreams up these kinds of stunts to sell more supercomputers to universities.

Perhaps you haven't been keeping track of the controversies? IBM getting savaged by its own original Blue Brain scientist.

Why did IBM let Modha make such a deceptive claim to the public?
I don't know. Perhaps this is a publicity stunt to promote their supercomputer. The supercomputer industry is suffering from the financial crisis and they probably are desperate to boost their sales. It is so disappointing to see this truly great company allow the deception of the public on such a grand scale.

http://nextbigfuture.com/2009/11/henry-markram-calls-ibm-cat-scale-brain.html
 
  • #194
from apeiron's link (criticizing IBM):

In real life, each segment of the branches of a neuron contains dozens of ion channels that powerfully control the information processing in a neuron. They have none of that. Neurons contain tens of thousands of proteins that form a network with tens of millions of interactions. These interactions are incredibly complex and will require solving millions of differential equations.

This is exactly right; it's not an easy problem. My textbook:

"From Molecules to Networks: and introduction to cellular and molecular neuroscience"

It is no exaggeration to say that the task of understanding how intrinsic activity, synaptic potentials, and action potentials spread through and are integrated within the complex geometry of the dendritic trees to produce the input-output operations of the neuron is one of the main frontiers of neuroscience.

But we do have NEURON and GENESIS to help us with this, using the compartmental models developed by Rall and Shepherd. (Shepherd co-authored this chapter of the textbook.)

I think what Q_Goest doesn't recognize is that passive currents (compartment models) only represent intrinsic currents, not the more interesting active currents (i.e. all the different channel dynamics: the feedback circuit between membrane potential and channel activation, the action potential).

Or the whole molecular/genetic behavior part that we've all continued to sweep aside. Neural activity also stimulates changes in genetic expression, so you have to talk about mRNA and transcription factors, which modulate the geometry of the dendrites and the strength of both chemical and electrical synapses (by increasing and decreasing channel sites), so there's even more complicated feedback happening at the molecular level.

Yes, we use compartmental models. They're not nearly the whole story.
 
  • #195
Pythagorean said:
this is mainstream (but in a nascent manner).

Evidence that it's mainstream:

apeiron said:
the effect of selective attention on neural receptive fields has been one of the hottest areas of research for the last 20 years.

As a neuroscientist, I wish you would both stop spreading what is either a poorly supported or a wrong view of what is mainstream in neuroscience. Saying that this or that work is compatible with an interpretation is not evidence that the interpretation is influential (in other words, I agree about the nascent manner, but a nascent mainstream is simply self-contradictory). Saying that receptive fields have been one of the hottest areas of research in the last 20 years is simply wrong. Single-unit techniques in animals were the gold standard from maybe 1935 to about 1990. Since then, what has happened is an impressive rise of new techniques devoted to recording in humans, mostly fMRI, and these techniques can't record receptive fields. You may not believe me, in which case simply look at how many papers you can retrieve with either brain + MRI or brain + receptive field in the last 20 years.

Q_Goest said:
However Davies does allow for some kind of emergence at the border between classical and quantum mechanics, which is where separability breaks down also.
Can you explain why separability should break down at this border, given that QM is perfectly computable?

Q_Goest said:
that’s like saying the spaceship is caused by the gun that Lievo keeps talking about. Unless you can clearly define selective attention and neural receptive fields, it won’t help.
These two concepts are perfectly and operationally defined. What's the problem?

apeiron said:
And Lievo's guns are precisely not an example of anything I have been talking about. What could be a more reductionist view of reality than CA? He may see "spaceships" and "guns" looking at the rigid operations of a finite state automaton. But that "meaning" is completely absent from the CA itself. It emerges nowhere within it.

Bedau uses these guns to argue for weak emergence. And I agree, so far as emergence goes, it is as weak as can be imagined. I am talking about something else here.
I'm sure you're well aware that CGoL is universal for computation, meaning that any computable system you may think of can be implemented on it. So are you saying strong emergence is not computational? If not, on what basis would you decide whether a given CGoL pattern shows or doesn't show strong emergence?
 
Last edited:
  • #196
We seem to be getting deeply into the neuroscience at this point, which is a perfectly appropriate place to go to study the neural correlates of mental qualia like free will. I would just like to point out at this point, as we have been talking about Pr, Ps, and M states, that we are free to adopt a physicalist perspective, and even choose to assume that physical states actually exist, and further assume that they form a closed system (whether it be Ps or Pr that are the most useful approaches, or whether either approach can be more or less useful in a given context). All the same, every single one of those is an assumption involved in a modeling approach, not a single one is an axiom (because they are not self-evident), and not a single one has convincing evidence to favor it. They are just choices made by the scientist to make progress.

In my view, which may be a minority but is logically bulletproof, there is no reason to imagine that we know that physical states either exist, or are closed, or even that it makes any sense to imagine that either of those are true beyond the usual idealizations we make to get somewhere. What is demonstrably true is that everything we mean by a physical state arises from perception/analysis of a class of experiences, all done by our brains. We can notice that our perceptions are correlated with the concepts we build up around the idea of a physical state, and we gain predictive power by building up those concepts, and not one single thing can we say beyond that. This is just something to bear in mind as we dive into the physicalist perspective, either at a reduced or systems level-- it is not at all obvious that this approach will ever be anything but a study of the neural correlates of mental states, i.e., the mental states may always be something different.
 
  • #197
Lievo said:
As a neuroscientist, I wish you would both stop presenting what is either a poorly supported or a wrong view of what is mainstream in neuroscience. Saying that this or that work is compatible with an interpretation is not evidence that the interpretation is influential (in other words, I agree about the nascent part, but a nascent mainstream is simply self-contradictory).

"Nascent mainstream" is not self-contradictory. Nonlinear science is nascent to the mainstream, but it is mainstream (i.e., the work is published in well-known peer-reviewed journals). I don't know what you think my interpretation is; I responded to Q_Goest, who (wrongly) put computational models and nonlinear science at odds.

You would agree, I hope, that the Hodgkin–Huxley model is a (~60-year-old) mainstream model. It's a nonlinear model. Popular among computational scientists is the Morris–Lecar model (because it's 2D instead of the 4D Hodgkin–Huxley, making phase-plane analysis and large networks much easier to handle).
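To make the nonlinearity concrete, here is a minimal sketch of the Morris–Lecar equations; the parameter values are one commonly used set, not canonical, and forward Euler is crude but enough to see limit-cycle spiking:

```python
import numpy as np

# Morris-Lecar model: a 2D nonlinear reduction of Hodgkin-Huxley-type
# dynamics (membrane voltage V and one K+ gating variable w).
C, g_L, g_Ca, g_K = 20.0, 2.0, 4.4, 8.0        # capacitance, conductances
V_L, V_Ca, V_K = -60.0, 120.0, -84.0           # reversal potentials (mV)
V1, V2, V3, V4, phi = -1.2, 18.0, 2.0, 30.0, 0.04

def derivs(V, w, I_ext):
    m_inf = 0.5 * (1 + np.tanh((V - V1) / V2))  # fast Ca2+ activation
    w_inf = 0.5 * (1 + np.tanh((V - V3) / V4))  # steady-state K+ activation
    tau_w = 1.0 / np.cosh((V - V3) / (2 * V4))
    dV = (I_ext - g_L * (V - V_L) - g_Ca * m_inf * (V - V_Ca)
          - g_K * w * (V - V_K)) / C
    dw = phi * (w_inf - w) / tau_w
    return dV, dw

# Forward-Euler integration of one second of activity at I_ext = 90.
V, w, dt = -60.0, 0.0, 0.05                    # mV, dimensionless, ms
for _ in range(20000):
    dV, dw = derivs(V, w, I_ext=90.0)
    V, w = V + dt * dV, w + dt * dw
print(f"state after 1 s: V = {V:.1f} mV, w = {w:.3f}")
```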

Theoretical neuroscience institutes and centers have popped up all over the world in the last 20 years, doing exactly the kind of work I'm talking about (you have Redwood at Berkeley, CTN in New York, the Seung Lab at MIT, and computational neuroscience programs within neuroscience departments).

We can argue about the semantics of "mainstream," but this is a well-funded, productive area of research.

Here... from 2001, TEN years ago...

Neurodynamics: nonlinear dynamics and neurobiology
Current Opinion in Neurobiology
Volume 11, Issue 4, 1 August 2001, Pages 423-430
 
  • #198
Lievo said:
As a neuroscientist, I wish you would both stop presenting what is either a poorly supported or a wrong view of what is mainstream in neuroscience... You may not believe me, in which case simply look at how many papers you can retrieve with either brain + MRI or brain + receptive field in the last 20 years.

You lose credibility with every post.

Check the top 10 papers from Nature Reviews Neuroscience over the past 10 years.
http://www.stanford.edu/group/luolab/Pdfs/Luo_NatRevNeuro_10Y_anniv_2010.pdf

Attention effects, neural integration, homeostatic organisation and other forms of global top-down self-organisation feature prominently.

2001 Brainweb 2.0: the quest for synchrony
2002 Attention networks: past, present and future
2004 Homeostatic plasticity develops!
2006 Meeting of minds: the medial frontal cortex and social cognition

Your claim that scanning can't be used to research top-down effects is obvious nonsense.

http://www.jneurosci.org/content/28/40/10056.short
or
http://www.nature.com/neuro/journal/v3/n3/full/nn0300_284.html
or
http://www.indiana.edu/~lceiub/publications_files/Pessoa_Cog_Neurosci_III_2004.pdf

Or as wiki says...

In the 1990s, psychologists began using PET and later fMRI to image the brain in attentive tasks. Because of the highly expensive equipment that was generally only available in hospitals, psychologists sought for cooperation with neurologists. Pioneers of brain imaging studies of selective attention are psychologist Michael I. Posner (then already renowned for his seminal work on visual selective attention) and neurologist Marcus Raichle. Their results soon sparked interest from the entire neuroscience community in these psychological studies, which had until then focused on monkey brains. With the development of these technological innovations neuroscientists became interested in this type of research that combines sophisticated experimental paradigms from cognitive psychology with these new brain imaging techniques. Although the older technique of EEG had long been used to study the brain activity underlying selective attention by cognitive psychophysiologists, the ability of the newer techniques to actually measure precisely localized activity inside the brain generated renewed interest by a wider community of researchers. The results of these experiments have shown a broad agreement with the psychological, psychophysiological and the experiments performed on monkeys.
http://en.wikipedia.org/wiki/Attention

And are you suggesting electrode recording is somehow passé?

http://www.the-scientist.com/2009/10/1/57/1/
 
  • #199
Lievo said:
So are you saying strong emergence is not computational?

Yes: by design, Turing machines are isolated from global influences. Their internal states are simply informational, never meaningful. Are you unfamiliar with Searle's Chinese Room argument, for example?
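To put the point in code (a toy sketch of my own, not anything from Searle): here is a complete Turing-style machine that increments a binary number. The "meaning" (binary arithmetic) lives entirely in our description of it; the machine itself only rewrites symbols according to a lookup table.

```python
# A minimal Turing-style machine: a transition table over symbols and
# state labels. Nothing in the machine "knows" the tape encodes a number;
# the semantics (binary increment) exist only in the observer's reading.
RULES = {
    # (state, symbol) -> (write, head move, next state)
    ("inc", "1"): ("0", -1, "inc"),    # carry propagates leftward
    ("inc", "0"): ("1",  0, "halt"),
    ("inc", " "): ("1",  0, "halt"),   # carry past the leftmost digit
}

def run(tape, head, state="inc"):
    cells = dict(enumerate(tape))
    while state != "halt":
        write, move, state = RULES[(state, cells.get(head, " "))]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells))

print(run("1011", head=3))  # -> '1100' (11 + 1 = 12 in binary)
```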

BTW, I don't accept the weak~strong dichotomy as it is being used here, because it too hardwires in the very reductionist assumptions that are being challenged.

As I have said, the entire system is what emerges. So you have a development from the vaguely existing to the crisply existing. But if weak = vague, and strong = crisp, then perhaps you would be making a fair translation.
 
  • #200
Ken G said:
All the same, every single one of those is an assumption involved in a modeling approach, not a single one is an axiom (because they are not self-evident), and not a single one has convincing evidence to favor it. They are just choices made by the scientist to make progress.

I agree. An axiom is just an assumption formulated for the purposes of modelling. Calling it self-evident is just another way of saying I can't think of anything better at the moment.

Mathematicians, of course, have often believed they were accessing Platonic truth via pure reason. But scientists follow the pragmatist philosophy of C. S. Peirce.
 