Neural correlates of free will

In summary, Benjamin Libet's work suggests that our decisions to act occur before our conscious awareness of them. The problem this poses for the idea of free will is that it seems to set up an either/or battle between determinism and free will. Some might adopt the position that the neurological correlates of free will are deterministic (if one wishes to adopt a kind of dualistic picture where all that is physical is deterministic and free will is housed in some extra-physical seat of conscious choice). Others might look critically at the very assumption that physically identifiable processes are deterministic in some "absolutely true" way, such that they could preclude a concept of free will.
  • #176
"whole greater than the sum of parts" to me means that (for instance) two people who take 2 hours to paint a house alone can paint it in :45 minutes (instead of 1 hour.. i.e. double the people doesn't mean half the time.. it actually makes more productivity because there's a synergistic effect). Energy and mass are still conserved (it still takes just as much paint, just as many paint strokes) but now the guy on the ladder doesn't have to climb up and down every time, since his partner doing the lower level can hand stuff to him.

In the same vein, as I double the number of neurons in my systems, the lifetime goes up superlinearly (twice the neurons gives four times the lifetime, for instance; that is quadratic rather than linear scaling; see the sketch below).
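To make those two claims concrete, here is a minimal Python sketch (mine, purely illustrative); the 45-minute figure and the quadratic exponent are just the numbers used in the examples above, not measured values:

```python
# Toy illustration of "whole greater than the sum of parts".
# The synergy factor and the quadratic lifetime exponent are assumptions
# taken from the examples in this post, not from any measurement.

def naive_time(solo_hours, workers):
    """Sum-of-parts estimate: doubling the workers halves the time."""
    return solo_hours / workers

def synergistic_time(solo_hours, workers, synergy=0.75):
    """Holistic estimate: cooperation shaves off a further fraction."""
    return naive_time(solo_hours, workers) * synergy

print(naive_time(2, 2))        # 1.0 hour  (the additive prediction)
print(synergistic_time(2, 2))  # 0.75 hour = 45 minutes (the synergistic one)

# "Twice the neurons is four times the lifetime" is lifetime ~ N**2:
# superlinear (quadratic) growth rather than strictly exponential.
for n in (100, 200, 400):
    print(n, "neurons ->", (n / 100) ** 2, "x baseline lifetime")
```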
 
  • #177
SPECULATION:

it seems to me that the holistic quantity is information.

But since (in principle) we can convert information to energy:

http://www.nature.com/nphys/journal/v6/n12/full/nphys1821.html

It means that there would be something to work out between conservation of energy and information holism that's not immediately obvious.

So perhaps the Energy+Information+Mass balance in the universe must remain constant, but the information has two forms, just like energy: useless and useful. Useless information is entropy, and entropy must increase, but it is compensated by a loss in useful information (information that acts, as in the demonstration above, to power the system).

Of course, from a human perspective, we're gaining useful information and compensating by putting useless information into the universe (entropy), but the universe as a whole continues to lose useful information, gaining entropy, diffusing towards heat death.
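For a sense of the energy scale in that information-energy tradeoff: Landauer's principle puts the minimum cost of erasing one bit at kT·ln 2 (the Nature demonstration linked above ran the conversion the other direction, via feedback control). A quick Python sketch of the number:

```python
# Landauer's bound: erasing one bit at temperature T dissipates at least k*T*ln(2).
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_limit(temperature_K, bits=1.0):
    """Minimum heat (joules) dissipated when `bits` of information are erased."""
    return bits * k_B * temperature_K * math.log(2)

print(landauer_limit(300))       # ~2.9e-21 J per bit at room temperature
print(landauer_limit(300, 8e9))  # a whole gigabyte erased: still only ~2.3e-11 J
```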
 
  • #178
Q_Goest said:
Let’s take the lead you provided from Kim regarding mental states (M) and physical states (P). For the causal closure of the physical, there are physical events P that determine other physical events. The mental events M are supervenient on the physical states but they don’t cause physical states. What causes physical states, assuming the causal closure of the physical, are other physical states. So the hypothetical neuroscientist that knows everything there is to know about our nervous system, can tell you what physical state P2 will follow physical state P1 (or what set of potential physical states will follow P1 if there is some random nature to them). Mental states that are described as phenomenal states are therefore epiphenomenal on the physical state. The mental state doesn’t cause the physical state, the physical states are caused by other physical states.

Epiphenomenal however, means that not only do these mental states not cause physical states, they also don’t influence them. They don’t have any way of influencing or causing a change in a physical state. If mental states were being “measured” by the physical state, they would no longer be epiphenomenal, they would suddenly become part of the causal chain that created the following physical state, so epiphenomenal in this regard means they really have no influence whatsoever over any physical state. So the paradox is, how do we know these states exist? The only reason given is that there is a 1 to 1 relationship between P and M, but that means we aren’t saying that we experience qualia because we actually experience that qualia. It says we are saying we experience something because of the physical states that cause us to utter those words.

OK, so here we have a view of reality that ends up arguing for a paradox. Which is why the intelligent response is to go back to the beginning and work on a different view, not to spend the rest of your life telling everyone you meet, "but this is the truth". That would be the crackpot response, wouldn't it?

Now it seems pretty transparent where the problem lies. If you start out assuming a definite separation between physical states and mental states, then it is no surprise that this is also the conclusion you end up with. And more subtly, you are even presuming something in claiming "states".

So let's start over. First we have to drop the idea of states, because that already hardwires in a reductionist perspective. A state is something with spatial extent but no temporal extent. It is a term that already precludes change. It is the synchronic view of "everything that is the case at this moment".

A systems view is one that specifically includes time, change, potential, development. So of course if you analyse reality in terms of states, you cannot take a systems view of reality. You have not proved that the systems view fails, just that you did not understand what the systems view was.

The systems view equivalent of the notion of "state" would be an "equilibrium". That is a state where there is change that does not make a change. So you have an extended spatiotemporal view that is completely dynamic, but also at rest in some useful sense.

So your arguments cannot just hinge on a naive notion of a state here. That is the first point.

Second, P "states" are tales of material causality. And yes we expect the tales to be closed. This is a now standard physicalist presumption, and it works very well. So I am happy to take it as my presumption too.

I then, as said, make a distinction between varieties of physicalism.

There is the familiar reductionist physicalism of atomism - reality is constructed bottom-up from a collection of immutable parts. Though as also argued, reductionism does smuggle in global constraints as its immutable physical laws, and other necessary ingredients, such as entropic gradients, a spacetime void, etc.

Let's call this variety of physicalism Pr (because giving things this kind of pseudo-mathematical terminology seems more impressive).

Then there is the second systems model of physicalism - let's call it Ps. This, following Aristotle and many other systems thinkers, recognises holism. Realities are also made of their global constraints which act downwards to shape the identity and properties of their parts (by restricting their local degrees of freedom).

And as said, because even Pr smuggles the notion of global constraints into its simpler ontology, we can say {Ps {Pr}}. Reductionism is formally a subset of holism. It is holism where the top-down constraints have become frozen and unchanging, leaving only the localised play of atoms, or efficient causes.

You personally may disagree that Ps is a valid model of material causality, but you have yet to make any proper argument against it (I don't think you actually even understand it enough).

So on to M states. Again, you have to recognise the extra constraints implied by the very word "state". Consciousness has a rich temporal structure (we know this experimentally, Libet is part of the evidence). So it is not legitimate to hardwire your conclusions into your premises by presuming "M states" as an ontic category.

We must thus step back to the general metaphysical dichotomy of physical and mental (matter~mind). What do the terms properly denote?

We have already agreed (I think) that P is a closed tale of material causes. And it can be seen that we are also presuming that it is an "objective" view. It is somehow what "actually exists out there", even though, being good philosophers, we have long come to realize the map is not the territory and we are in fact only modelling the world. So it is what Nozick rightly calls the maximally invariant view - the supposed god's eye "view from nowhere".

So physicalism actually embeds further presumptions. It acknowledges its roots in subjectivity and becomes thus an epistemological device. It says this is how we model in a certain way.

The "material world of closed causality" - either Pr or Ps - is not actually the ontological view, just a view of ontology! P implies M. Or {M{P}}. Or indeed {M{Ps{Pr}}}

Now what in turn is properly denoted by "mental"? Well, it starts as everything that is, so far as we are concerned. That is all there really is, as the only way we know anything is through being a mind.

But when used as part of a metaphysical dichotomy, the idea of mental, as opposed to physical, is trying after some more constrained meaning. It is trying to get at something which stands in contrast to our idea of the physical. So what? And what legitimately?

One of the obvious distinctions is between the observed and the observer, the interpreted and the interpreter, the modeled and the modeller. The very existence of a "done to" implies also the existence of a "doer". So there is a mind acting, and then the physical world it is acting upon.

And clearly a causal relationship is being suggested here, an interaction. I do the modelling and the world gets modeled. But I can also see the world is driving my modelling because of what happens when I wrongly model it.

So the everyday notion of the mental is about this contrast, and one that is still plainly causal. A connection is presumed as quite natural. So far the dichotomy seems natural, legitimate, and not paradoxical.

But then along come the philosophers who want to push the distinction further - to talk about res cogitans and res extensa, about qualia, about Ding an sich.

What were complementary aspects of an equilibrium-seeking process (a systems view of the mind as a pragmatic distinction between the observers and the observed) suddenly become treated as different fundamental categories of nature. The distinction becomes reified so that there is the P and the M as axiomatically disconnected realms - where now a connection has to be forged as a further step, and not being able to do so becomes treated as a metaphysical paradox.

So yes, P~M has a social history as an idea. And the assumptions made along the way have got buried.

The "mental" properly refers to the fact that reality can become complexly divided into actors and their actions, models and the modeled, the subjective experience that is our everything and the objective stance that is our attempt to imagine an invariant, god's eye, view of "everything" (which is actually a view constructed of general theories - or formalised descriptions of global constraints - and the predictions/measurements that animate these theories, giving them their locally-driven dynamics).

So P here becomes a judgement of the degree of success we feel in modelling reality in terms of fundamental theories - theories describing reality's global constraints. And Ps is a more complete approach to modelling than Pr, but Pr is also the simpler and easier to use.

M is then epiphenomenal in the sense it is all that is not then part of this model - and so it stands for the modeller. It is not epiphenomenal by necessity - everything is actually just subjective experience in the end. But it is epiphenomenal by choice. We put the M outside the P so as to make the P as simple as possible. It is a pragmatic action on our part.

Now Pr quite clearly puts M way outside because it does away with observers, modellers, and other varieties of global constraint (as explicit actors in the dynamics being modeled). So Pr becomes a very poor vehicle for the pragmatic modelling of "mind" - of systems which in particular have non-holonomic constraints and so have active and adaptive top-down control over their moment-to-moment "mental states".

But with Ps, you can start to write formal models of observers and the observed. You can't model "the whole of M", as even Ps remains within M. This is the irreducible part of the deal. Nothing could invert the relationship so far as M is concerned. Yet within M we can have the Ps-based models of observer~observed relationships. And indeed I've referred frequently to the work of Friston (Bayesian brains), Rosen (modelling relations), Pattee (epistemic cut), as examples of such systems-based modelling.

So M - Ps = M'. We can explain away a lot via physicalist modelling, yet there will still be a final residue. But it is not the M that is epiphenomenal to the P. Rather the other way round. The mind does not have to have models based on physicalist notions of closed systems of entailment to exist. It existed already. And it created the P that claims to exist as causally isolated from the subjective wishes, whims and desires of the M.
 
  • #179
Hi Q_Goest,

Q_Goest said:
Let’s take the lead you provided from Kim regarding mental states (M) and physical states (P). For the causal closure of the physical, there are physical events P that determine other physical events. The mental events M are supervenient on the physical states but they don’t cause physical states. What causes physical states, assuming the causal closure of the physical, are other physical states. So the hypothetical neuroscientist that knows everything there is to know about our nervous system, can tell you what physical state P2 will follow physical state P1 (or what set of potential physical states will follow P1 if there is some random nature to them). Mental states that are described as phenomenal states are therefore epiphenomenal on the physical state. The mental state doesn’t cause the physical state, the physical states are caused by other physical states.

apeiron said:
If you start out assuming a definite separation between physical states and mental states, then it is no surprise that this is also the conclusion you end up with. And more subtly, you are even presuming something in claiming "states".

Maybe another way to express apeiron's comment is the following:
Please give a look again at the glider gun (http://en.wikipedia.org/wiki/Gun_(cellular_automaton)). Would you say the gun causes the emission of the spaceships, or would you say it doesn't, because all states just follow from a previous state?

I guess you would say there's a causal link, even though this link can also be described in terms of P1 to P2 transitions. Of course we don't know if free will follows the same trick, but don't you think this analogy still demonstrates that there is no logical impossibility between determinism and free will?
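For anyone who hasn't played with it, here is a minimal Python sketch (mine, purely illustrative) of the Game of Life rule in question: the only "law" in the system is the local P1 to P2 transition, yet the glider is most naturally described as a pattern that persists and travels:

```python
from itertools import product

def step(live):
    """One P1 -> P2 transition of Conway's Game of Life (B3/S23 rule)."""
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                cell = (x + dx, y + dy)
                counts[cell] = counts.get(cell, 0) + 1
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
# After 4 low-level transitions the same shape reappears, shifted by (1, 1):
print(state == {(x + 1, y + 1) for (x, y) in glider})  # True
```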
 
  • #180
Hi apeiron. Thanks very much for trying to explain. Seriously. I appreciate you attempting to thoughtfully and carefully bring out those views that you feel are pertinent to this portion of the discussion. I’d like to better understand your views, so I appreciate you taking the time to try and explain them. Perhaps you could start a thread that highlighted those views and provide references so I can dig deeper. That said, I honestly can’t make heads or tails of your post. Take this for instance.
A state is something with spatial extent, but not a temporal extent. It is a term that already precludes change.
I don’t see how physical states preclude change. Physical states exist in both space and time. In phase space (http://en.wikipedia.org/wiki/Phase_space#Thermodynamics_and_statistical_mechanics), for example, if a system consists of N particles, then a point in the 6N-dimensional phase space describes the dynamical state of every particle in that system, as each particle is associated with three position variables and three momentum variables. In this sense, a point in phase space is said to be a microstate* of the system. Physical states require both dimensional and temporal information to describe them, so I don’t know why one would claim that physical states don’t have a “temporal extent”. I don’t know what that means.
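To make that definition concrete, here is a quick illustrative sketch (the values are random placeholders, nothing more):

```python
import random

N = 5  # number of particles

def microstate(n):
    """One phase-space point: (x, y, z, px, py, pz) for each of n particles."""
    return [random.gauss(0.0, 1.0) for _ in range(6 * n)]

point = microstate(N)
print(len(point))  # 6N = 30 numbers pin down the dynamical state of all N particles
# The equations of motion then evolve this single point along a trajectory in time.
```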

I’d honestly like to understand why there are people who find the nonlinear approach (dynamics approach, systems approach, etc.), such as that of Alwyn Scott, Evan Thompson, and many others, so appealing, but these are not mainstream ideas. Would you not agree? The mainstream ideas surrounding how consciousness emerges regard computationalism, which doesn’t seem to fit with this other approach. From where I sit, weak emergence and separability of classical systems are well-founded, mainstream ideas that are far from being overturned. They are used daily by neuroscientists who take neurons out, put them in Petri dishes, and subject them to controlled experiments as if they were still in vivo. Then they compare this reductionist experiment with the brain and with computer models which are clearly only weakly emergent. So what is it that is really being promoted by this systems approach?

*The microstates of the system are what Bedau is referring to when he defines weak emergence.

Lievo said:
Please give a look again at the glider gun (http://en.wikipedia.org/wiki/Gun_(cellular_automaton)). Would you say the gun causes the emission of the spaceships, or would you say it doesn't, because all states just follow from a previous state?

I guess you would say there's a causal link, even though this link can also be described in terms of P1 to P2 transitions. Of course we don't know if free will follows the same trick, but don't you think this analogy still demonstrates that there is no logical impossibility between determinism and free will?
It is perfectly acceptable, in layman's terms, to say that the gun caused the emission of the spaceship. I talk to other engineers about how a valve causes a pressure drop in a fluid flowing through it. Certainly the valve has no 'free will', just as the gun in the Game of Life (GoL) has no free will when it creates a spaceship. The point being that layman's terms are not applicable to what is the 'efficient' cause. Yes, causes can go right down to some most primitive particle, and hence it is the desire of some physicists to find a "theory of everything". One can say the gun caused the spaceship, but as weak emergence would have it, the ability to emit a spaceship is dependent on the ability of individual cells in the Game of Life to change state from white to black and back again, which is a function of the rules of the game, just as lower-level physical laws are the 'rules' by which higher-level phenomena appear. Even classical mechanics is taken to be 'emergent' on the interactions of many molecules, atoms or particles, just as the gun and spaceship in the GoL emerge from the interactions of the individual cells. However, separability breaks down at the level of molecular interactions and below. Somewhere between the classical scale and the QM scale, there must be a change in the basic philosophy of how to treat nature, and how to treat causation.


Oh... and by the way. I (think I) agree with both of you regarding the fundamental issue that the knowledge paradox seems to flounder on. The problem starts out by using as an axiom that phenomenal states are not physically describable. They are not physical states. Once you define qualia that way, you may as well go whole hog and admit that the causal closure of the physical is false. These two axioms are at odds which is why there is a paradox.
 
  • #181
Q_Goest said:
I don’t see how physical states preclude change. Physical states exist in both space and time. In phase space (http://en.wikipedia.org/wiki/Phase_space#Thermodynamics_and_statistical_mechanics), for example, if a system consists of N particles, then a point in the 6N-dimensional phase space describes the dynamical state of every particle in that system, as each particle is associated with three position variables and three momentum variables. In this sense, a point in phase space is said to be a microstate* of the system. Physical states require both dimensional and temporal information to describe them, so I don’t know why one would claim that physical states don’t have a “temporal extent”. I don’t know what that means.

So you would disagree with the Wiki definition of states in classical physics as " a complete description of a system in terms of parameters such as positions and momentums at a particular moment in time"?
http://en.wikipedia.org/wiki/State_(physics)

All you are saying in pointing out that 6 dimensions can capture "all" the dynamics of a particle is that this is the way dynamics can be modeled in reductionist terms. You can freeze the global aspects (the ones that would be expressed in time) and describe a system in terms of local measurements.

Yet we know from QM that the position and momentum cannot be pinned down with this arbitrary precision - this seems a very strong ontological truth, no? And we know from chaos modelling that a failure to be able to determine initial conditions means that we cannot actually construct the future of a collective system from a 6N description. And we know from thermodynamics that we cannot predict the global attractor that will emerge in such a 6N phase space even if we did have exactly measured initial conditions - the shape of the attractor can only emerge as a product of a simulation. Etc, etc.
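The chaos point can be made concrete with a one-line map; this is a generic textbook illustration (the logistic map at r = 4), not a model of any particular system discussed here:

```python
r = 4.0                  # fully chaotic regime of the logistic map x -> r*x*(1-x)
x, y = 0.3, 0.3 + 1e-12  # two preparations identical to any realistic measurement

for step in range(1, 61):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step}: |x - y| = {abs(x - y):.3e}")
# The separation roughly doubles each step: by step ~40 it is of order 1,
# and the initial-condition information is effectively gone.
```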

So we know for many reasons that a state based description of reality is a reduced and partial model good for only a limited domain of modelling. To then use it as the unexamined basis of philosophical argument is a huge mistake. Even if as you argue, it is a "mainstream" mistake.

You probably still don't understand why states are the local/spatial description and exclude global temporal development. But "at a particular moment in time" seems a pretty clear statement to me. It is the synchronic rather than diachronic view. Surely you are familiar with the difference?

I’d honestly like to understand why there are people who find the nonlinear approach (dynamics approach, systems approach, etc.), such as that of Alwyn Scott, Evan Thompson, and many others, so appealing, but these are not mainstream ideas. Would you not agree? The mainstream ideas surrounding how consciousness emerges regard computationalism, which doesn’t seem to fit with this other approach. From where I sit, weak emergence and separability of classical systems are well-founded, mainstream ideas that are far from being overturned. They are used daily by neuroscientists who take neurons out, put them in Petri dishes, and subject them to controlled experiments as if they were still in vivo. Then they compare this reductionist experiment with the brain and with computer models which are clearly only weakly emergent. So what is it that is really being promoted by this systems approach?

Perhaps your area of expertise is computer science and so yes, this would not be the mainstream view in your world. But I am not sure that you can speak for neuroscience here. In fact I know you can't.

I have repeatedly challenged you on actual neuroscience modelling of the brain, trying to direct your attention to its mainstream thinking - the effect of selective attention on neural receptive fields has been one of the hottest areas of research for the last 20 years. But you keep ducking that challenge and keep trying to find isolated neuron studies that look comfortably reductionist to you.

You just don't get the irony. Within neuroscience, that was the big revolution of the past 20 years. To study the brain, and even neurons and synapses, in an ecologically valid way. Even the NCC hunt of consciousness studies and the brain imaging "revolution" was based on this.

People said we have been studying the brain by isolating the components. And it has not really told us what we want to know. We stuck electrodes into the brains of cats and rats. But they were anaesthetised, not even conscious. And it was single electrodes, not electrode arrays. But now (around 20 years ago) we have better equipment. We can record from awake animals doing actual cognitive tasks and sample activity from an array of regions. Even better, we can stick humans in a scanner and record the systems level interactions.

Yet you say the mainstream for neuroscience is people checking the electrical responses of dissected neurons in petri dishes, or IBM simulations (gee, you don't think IBM is just about self-promotion of its supercomputers here?).

I used to write for Lancet Neurology, so I think I have a better idea of what is mainstream in neuroscience.

Again, remember that my claim here is not that reductionism (the computer science view of life) is wrong. Just that it is the subset of the systems view you arrive at when you freeze out the issue of global constraints. It is the adiabatic view. Where the larger Ps model also has to be able to deal with the non-adiabatic story - where global constraints actually develop, evolve, change, in time.
 
  • #182
Q_Goest said:
Oh... and by the way. I (think I) agree with both of you regarding the fundamental issue that the knowledge paradox seems to flounder on. The problem starts out by using as an axiom that phenomenal states are not physically describable. They are not physical states. Once you define qualia that way, you may as well go whole hog and admit that the causal closure of the physical is false. These two axioms are at odds which is why there is a paradox.

Or instead, you could recognise that you had made a wrong move in assuming P and M to be ontologically separate (rather than epistemically separable - big difference).

The axiom that P and M are separate can be false, and yet the axiom that P is closed can still be true.

And you again seem to be missing the point that axioms are epistemological assertions of modelling convenience rather than statements of ontological truth. They are "truths" that seem reasonable on the grounds of generalised experience, rather than truths that are known to be true due to some magical kind of direct revelation.
 
  • #183
Q_Goest said:
Perhaps you could start a thread that highlighted those views and provide references so I can dig deeper.
You're asking apeiron to provide references to support his claims... dude you like to live with risk! :biggrin:

Q_Goest said:
The point being that layman's terms are not applicable to what is the 'efficient' cause.
I don't think I get your point here. What is the difference you see between layman causality and efficient causality?

Q_Goest said:
One can say the gun caused the spaceship, but as weak emergence would have it, the ability to emit a spaceship is dependent on the ability of individual cells in the Game of Life to change state from white to black and back again, which is a function of the rules of the game
Or one can say that the gun is an algorithm, which indeed it is (and a simple one: it's just a periodic oscillator), so the behavior does not need to be tied to the particular rules of CGL: any system mathematically equivalent to these guns is at bottom the same system. So if we allow ourselves to say that the gun causes the spaceship, then the causality is in fact the identity to a Turing machine. That said, a non-trivial consequence is that free will is a set of algorithms, defined as those that can behave as something we would recognize as having free will. (...) It's late, I'm becoming unclear I guess. See you. :smile:
 
  • #185
Hi apeiron. I found that last post to be totally understandable. Not like the previous post at all. I really wonder how you manage to switch the flowery talk on and off like that. No offense intended.

apeiron said:
Yet we know from QM that the position and momentum cannot be pinned down with this arbitrary precision - this seems a very strong ontological truth, no?
But pinning down particles isn't important to classical mechanics. Sure, the real world isn't classical, but that's not the point. The conventional view is that quantum mechanics isn't a factor in how the brain works because there are sufficient statistical aggregates of particles that individual particles don't matter. They're simply averaged together. Is this "systems view" dependent on individual particles? Clearly Alwyn Scott for example, makes the point as do many others that classical mechanics has this 'more than the sum' feature already intrinsic to it and I’d think Scott’s views probably mirror your ideas fairly closely.

And we know from chaos modelling that a failure to be able to determine initial conditions means that we cannot actually construct the future of a collective system from a 6N description.
But is that because of not having the initial conditions of the individual particles? Or not having the initial conditions of the classically defined states? Density, internal energy, entropy, etc., of the system are obviously independent of specific individual particle states, but not of the aggregate. So not knowing individual particle states will lead to indeterminate future states, and yes, the classical model is a model. But one has to show that there is a "meaningful difference" between having initial particle conditions and having initial classical conditions. I see no meaningful difference. Sure, the classical approach isn't exact, but it is exact to the degree you have the initial conditions of the classical states, and that's what's important, unless the quantum mechanical states are being roped into causing different classical states by downward causation, which isn't possible.

And we know from thermodynamics that we cannot predict the global attractor that will emerge in such a 6N phase space even if we did have exactly measured initial conditions - the shape of the attractor can only emerge as a product of a simulation. Etc, etc.
Can you provide an example of a global attractor? One that regards classical mechanics and can't be predicted by weak emergence? This is fundamentally where we disagree. I would say Benard cells are a perfect example of a weakly emergent structure, and I'd contend that's a mainstream idea, not just my own. Regarding "mainstream", perhaps that feels like a knock, so instead I'll say "cutting edge" or something. Anyway, as Davies points out:
Thus we are told that in Benard instability, … the molecules organize themselves into an elaborate and orderly pattern of flow, which may extend over macroscopic dimensions, even though individual molecules merely push and pull on their near neighbors. This carries the hint that there is a sort of choreographer, an emergent demon, marshalling the molecules into a coherent, cooperative dance, the better to fulfil the global project of convective flow. Naturally this is absurd. The onset of convection certainly represents novel emergent behavior, but the normal inter-molecular forces are not in competition with, or over-ridden by, novel global forces. The global system ‘harnesses’ the local forces, but at no stage is there a need for an extra type of force to act on an individual molecule to make it comply with a ‘convective master plan’.
Also from Davies
Strong emergence cannot succeed in systems that are causally closed at the microscopic level, because there is no room for additional principles to operate that are not already implicit in the lower-level rules.
However Davies does allow for some kind of emergence at the border between classical and quantum mechanics, which is where separability breaks down also.

You probably still don't understand why states are the local/spatial description and exclude global temporal development. But "at a particular moment in time" seems a pretty clear statement to me. It is the synchronic rather than diachronic view. Surely you are familiar with the difference?
No, I'm not. Feel free... Regardless, why should it be controversial to suggest that there exists a physical reality at a particular moment in time? If the argument drops into quantum mechanics, there’s no point in arguing. At that point, we have to suggest that neurons interact due to some quantum mechanical interaction, which isn’t worth arguing about.

Perhaps your area of expertise is computer science and so yes, this would not be the mainstream view in your world. But I am not sure that you can speak for neuroscience here. In fact I know you can't.
I guess I disagree.

I have repeatedly challenged you on actual neuroscience modelling of the brain, trying to direct your attention to its mainstream thinking - the effects on selective attention on neural receptive fields has been one of the hottest areas of research for the last 20 years. But you keep ducking that challenge and keep trying to find isolated neuron studies that look comfortably reductionist to you.
I think you misunderstand. My daughter has selective attention. I have no doubt it influences neural receptive fields! lol But that’s like saying the spaceship is caused by the gun that Lievo keeps talking about. Unless you can clearly define selective attention and neural receptive fields, it won’t help.

You just don't get the irony. Within neuroscience, that was the big revolution of the past 20 years. To study the brain, and even neurons and synapses, in an ecologically valid way. Even the NCC hunt of consciousness studies and the brain imaging "revolution" was based on this.

People said we have been studying the brain by isolating the components. And it has not really told us what we want to know. We stuck electrodes into the brains of cats and rats. But they were anaethetised, not even conscious. And it was single electrodes, not electrode arrays. But now (around 20 years ago) we have better equipment. We can record from awake animals doing actual cognitive tasks and sample activity from an array of regions. Even better, we can stick humans in a scanner and record the systems level interactions.

Yet you say the mainstream for neuroscience is people checking the electrical reponses of disected neurons in petri dishes, or IBM simulations (gee, you don't think IBM is just about self-promotion of its supercomputers here?).
I don’t disagree with anything you said here except that I’m sure IBM isn’t somehow influencing neuroscience with profits from their computers. Sure, we’ve made progress in understanding how brains work, just as you say. That’s all the kind of work that’s necessary for the reductionist approach. I suspect you intend to mean that all this experimentation is unique somehow to a systems approach, but I don’t see why.
 
  • #186
apeiron said:
I supplied Q Goest with the references he requested long ago. Perhaps he did not read them? That often happens here doesn't it :wink:?

https://www.physicsforums.com/showpost.php?p=2501587&postcount=7
Not sure why you brought up that link.
You quoted Bedau who only accepts weak emergence as I've pointed out.
You quoted Emmeche who rejects strong downward causation and his medium downward causation is frighteningly like weak downward causation. It either has to drop into one or the other category, which isn't clear.
You quoted yourself on Physicsforums
You quoted Google
And this one: http://www.calresco.org/
and a handful of other web sites. I don't want to read web sites though, except perhaps the Stanford Encyclopedia of Philosophy or maybe Wikipedia.

I guess we should just disagree and leave it at that.
 
  • #187
Lievo said:
I don't think I get your point here. Why do you agree to attribute the gun with freedom but not the valve,
I'm not attributing the gun with freedom (free will). I'm saying it has just as much as the valve, which is none.

Lievo said:
and what is the difference you see between layman causality and efficient causality?
Layman causality is not efficient causality.
 
  • #188
apeiron said:
I supplied Q Goest with the references he requested long ago. Perhaps he did not read them? That often happens here doesn't it :wink:?
Perhaps he had a look and the references were not supporting your claims unless a lot of creativity was involved. That sometimes happens, doesn't it :wink:?
 
  • #189
Q_Goest said:
Layman causality is not efficient causality.
That's what you said, and I perfectly understood that it was what you said. My question again: please explain why you think there is a difference.
 
  • #190
Q_Goest said:
I’d honestly like to understand why there are people who find the nonlinear approach (dynamics approach, systems approach, etc.), such as that of Alwyn Scott, Evan Thompson, and many others, so appealing, but these are not mainstream ideas. Would you not agree? The mainstream ideas surrounding how consciousness emerges regard computationalism, which doesn’t seem to fit with this other approach. From where I sit, weak emergence and separability of classical systems are well-founded, mainstream ideas that are far from being overturned.

This seems like a contradiction to me: the nonlinear approach and the systems approach (i.e. the complex-systems approach) aren't at odds with computationalism (and are indeed part of it). Any time you read about spiking networks, you're reading about physiologically derived neuron models that are inherently nonlinear and give rise to chaos when coupled together with biologically derived coupling terms (excitatory, inhibitory, diffusive). Since there are several operating in a network, you have a complex system. Yes, this is mainstream (but in a nascent manner).

Evidence that it's mainstream:

Journals

Physical Review E now includes biological systems:
http://pre.aps.org/
neuroscience in that journal:
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2713719/

AIP: Chaos: An Interdisciplinary Journal of Nonlinear Science (you should actually read this):
http://chaos.aip.org/about/about_the_journal

A neuroscience example from Chaos:
http://chaos.aip.org/resource/1/chaoeh/v18/i2/p023102_s1?isAuthorized=no

Neuroscience:
http://www.ncbi.nlm.nih.gov/pubmed/10362290

Chemical Physics and Physical Chemistry:
http://onlinelibrary.wiley.com/doi/10.1002/cphc.200500499/full

Published Authors:

Wulfram Gerstner (head of the Computational Neuroscience lab at the Brain Mind Institute in Lausanne, Switzerland)
http://icwww.epfl.ch/~gerstner//BUCH.html

Ermentrout:
http://www.pitt.edu/~phase/

Eugene Izhikevich:
http://www.braincorporation.com/

Izhikevich wrote the textbook "Dynamical Systems in Neuroscience":
http://www.izhikevich.org/publications/dsn.pdf

There's Tsumoto:
http://pegasus.medsci.tokushima-u.ac.jp/~tsumoto/achieve/index-e.html

It's the complex global behavior that is unpredictable (which is why we have to run simulations, then interpret them geometrically: i.e. pretty pictures that we interpret from our personal experiences and what we know experimentally about the neurons). There's no "deterministic solution" to the resulting complex behavior. Yet it still can be quantified with average lifetimes, finding the different qualitative regimes through bifurcation analysis, etc., etc. You can go back and look at a bunch of numbers that the code spits out, but they're not very meaningful in the standard mathematical analysis. We need Poincare's geometric approach.
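Since Izhikevich's book is cited above, here is a minimal sketch of his two-variable spiking model, using the standard "regular spiking" parameters from his papers; a toy integration for illustration, not production code:

```python
# Izhikevich model: v' = 0.04v^2 + 5v + 140 - u + I,  u' = a(bv - u),
# with a reset (v -> c, u -> u + d) whenever v crosses 30 mV.
a, b, c, d = 0.02, 0.2, -65.0, 8.0   # "regular spiking" parameter set
v, u = -65.0, b * (-65.0)
dt, I = 0.5, 10.0                    # time step (ms) and constant input current

spikes = []
for step in range(2000):             # one second of simulated time
    v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:
        spikes.append(step * dt)     # record spike time in ms
        v, u = c, u + d              # reset the fast variable, bump the slow one
print(f"{len(spikes)} spikes in 1 s of simulated time")
```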
 
  • #191
Q_Goest said:
Hi apeiron. I found that last post to be totally understandable. Not like the previous post at all. I really wonder how you manage to switch the flowery talk on and off like that. No offense intended.

Do you have examples of this flowery language? Systems science being an interdisciplinary subject, there is a problem that there are a lot of different jargons - ways of saying the same things coming from different fields.

But pinning down particles isn't important to classical mechanics. Sure, the real world isn't classical, but that's not the point. The conventional view is that quantum mechanics isn't a factor in how the brain works because there are sufficient statistical aggregates of particles that individual particles don't matter. They're simply averaged together. Is this "systems view" dependent on individual particles? Clearly Alwyn Scott for example, makes the point as do many others that classical mechanics has this 'more than the sum' feature already intrinsic to it and I’d think Scott’s views probably mirror your ideas fairly closely.

QM shows there is a problem even at the fundamental physical level. And the systems argument is that the same thing is going on over all scales of analysis.

Let me remind you what the systems approach actually is.

It says systems are formed hierarchically from two complementary kinds of causality - bottom-up construction and top-down constraint. And they are "a system" in that they are mutual or synergistic in their causality. Each is making the other. And so they develop (emerge) together in holistic fashion.

You are arguing from the reductionist viewpoint where there is a definite atomistic grain to reality. You have a bunch of stuff that already exists (it is not emergent) and it constructs some kind of global order (the forms emerge from the materials).

The reductionist story cannot of course address the reasons why the local atomistic grain exists. And while a "global state" might emerge, this is not the same as the global constraints of a system.

The temperature or pressure of an ideal gas is a macroscopic measurement, not a constraint. The constraints of the ideal gas would be the walls of the container, the external bath that keeps the system at a constant equilibrium, etc. All the general conditions that allow the system to have "a condition".

The systems approach instead says all that exists locally are degrees of freedom. Well, exist is too strong a word for random fluctuations - an unlimited number of degrees of freedom. So there is nothing definite at the scale of the local grain at the beginning.

But then unconstrained fluctuations lead to randomly occurring interactions and at some point constraints start to form as a result. Constraints have the effect of limiting local degrees of freedom. There is suddenly the beginnings of less freedom in the fluctuations and so a more definite direction to their action. The global constraints in turn become more definite, feeding back again on the local freedoms and the whole system undergoes a phase transition to a more ordered state.

This is a model familiar from many fields - Peircean semiotics, spin glasses, generative neural nets, Haken's synergetics, second order cybernetics, Hofstadter's strange loops, Salthe's hierarchy theory, Ulanowicz's ecological ascendancy, etc.

The key point is that there is now no definite local atomistic grain. It is part of what must emerge. It is like solitons or standing waves. The "particles" composing the system are locally emergent features that exist because of global constraints on local freedoms.
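One stock toy model of this feedback loop (my illustration, not drawn from any of the authors above) is the mean-field Ising magnet, where the global magnetisation that the spins collectively generate acts back to constrain each spin's local freedom:

```python
import math

def magnetisation(T, Jz=1.0, iters=500):
    """Iterate the self-consistency loop m = tanh(Jz * m / T)."""
    m = 0.01  # a tiny initial fluctuation
    for _ in range(iters):
        m = math.tanh(Jz * m / T)
    return m

for T in (2.0, 1.2, 0.9, 0.5):
    print(f"T = {T}: m = {magnetisation(T):.3f}")
# Above the critical temperature (T > Jz) the fluctuation dies out: no global order.
# Below it, the constraint amplifies itself and the system undergoes a phase
# transition into a state where the local spins are no longer free.
```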

So QM says there are no locally definite particles until there is an act of observation - a decoherence of the wavefunction by an observing world.

And the same at the neuroscience level. A neuron's receptive field is constrained first by a developmental history (learning over time) and then even by attentional effects (top-down shaping over the course of 100 ms or so).

So reductionism gives you the frozen, static, simple description of a system. The local grain just exists (whereas in the systems view it is shaped by constraint). And the global constraints are unchanging (whereas in the systems view, they are part of what has to self-organise).

Consider the parallels with genetic algorithms. A pool of code (representing many local degrees of freedom) is forced to self-organise by imposing some general global constraints. A program is evolved. (This is not exactly what I am talking about, just an illustration that might be familiar to you; see the sketch below.)
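A minimal sketch of that genetic-algorithm picture (the target string and all parameters are arbitrary illustrations): a pool of bitstrings, the local degrees of freedom, is shaped by a single global constraint, the fitness function:

```python
import random

TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]  # an arbitrary global "specification"

def fitness(genome):
    """The global constraint: agreement with the target pattern."""
    return sum(g == t for g, t in zip(genome, TARGET))

pool = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
for generation in range(50):
    pool.sort(key=fitness, reverse=True)
    survivors = pool[:10]                    # selection: the constraint acting downward
    children = []
    for _ in range(20):
        p1, p2 = random.sample(survivors, 2)
        cut = random.randrange(1, len(TARGET))
        child = p1[:cut] + p2[cut:]          # crossover recombines local freedoms
        if random.random() < 0.1:
            i = random.randrange(len(TARGET))
            child[i] ^= 1                    # occasional mutation
        children.append(child)
    pool = survivors + children

best = max(pool, key=fitness)
print(best, fitness(best))  # the pool self-organises under the imposed constraint
```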

As to Scott, it is a long time since I spoke to him or read his book, so I can't actually remember how close he is to my current views. Though from dim memory, I think he was a little more simplistic - like John Holland and others pushing the Santa Fe view of complexity that was vogue in the US at that time. (Gell-Mann and Kauffman were deeper IMO)

But is that because of not having the initial conditions of the individual particles? Or not having the initial conditions of the classically defined states? Density, internal energy, entropy, etc., of the system are obviously independent of specific individual particle states, but not of the aggregate. So not knowing individual particle states will lead to indeterminate future states, and yes, the classical model is a model. But one has to show that there is a "meaningful difference" between having initial particle conditions and having initial classical conditions. I see no meaningful difference. Sure, the classical approach isn't exact, but it is exact to the degree you have the initial conditions of the classical states, and that's what's important, unless the quantum mechanical states are being roped into causing different classical states by downward causation, which isn't possible.

If everything is nicely "linear" - static and unchanging at the atomistic level due to static and unchanging global constraints - then coarse-graining can be good enough for modelling.

But the discussion was about systems that are complex and developing - such as brains. Where the global constraints are, precisely, non-holonomic. Where the local grain (for example, neural receptive fields) is dynamically responsive.

Taking an ideal gas as, again, a standard classical physics model of a system, note that we can impose a temperature on the system, but we cannot determine the individual kinetic freedoms of the particles. We can just constrain them to a Gaussian distribution.

So this is coarse graining in action. The individual motions are unknown and indeed unknowable (Maxwell's Demon). But they are constrained to a single statistical scale - that of the system's now atomistic microstates. And we can calculate on that basis. We have frozen out the sources of uncertainty (either local or global) so far as our modelling is concerned.
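A minimal sketch of that coarse-graining (the values are roughly those of argon at room temperature, purely illustrative): impose a temperature, sample the individually unknowable velocities from the Gaussian it enforces, and recover the macroscopic number from the aggregate alone:

```python
import math, random

k_B, T, m, N = 1.380649e-23, 300.0, 6.6e-26, 100_000  # SI units; m ~ one argon atom

sigma = math.sqrt(k_B * T / m)  # per-component spread fixed by the global constraint
vels = [(random.gauss(0, sigma), random.gauss(0, sigma), random.gauss(0, sigma))
        for _ in range(N)]

mean_ke = sum(0.5 * m * (vx*vx + vy*vy + vz*vz) for vx, vy, vz in vels) / N
print(f"recovered T = {2 * mean_ke / (3 * k_B):.1f} K")  # equipartition: <KE> = 1.5 k_B T
# No individual trajectory is known (Maxwell's Demon territory), yet the
# statistical scale of the system is completely determined.
```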

Can you provide an example of a global attractor? One that regards classical mechanics and can't be predicted by weak emergence? This is fundamentally where we disagree. I would say Benard cells are a perfect example of a weakly emergent structure, and I'd contend that's a mainstream idea, not just my own. Regarding "mainstream", perhaps that feels like a knock, so instead I'll say "cutting edge" or something. Anyway, as Davies points out.

I gave you the Collier reference on Benard cells. Can you instead supply me with a reference where the global organisation was predicted purely from a local model of the molecules thrown together?

...I'll answer the rest of your points later...
 
  • #192
Lievo said:
Perhaps he had a look and the references were not supporting your claims unless a lot of creativity was involved. That sometimes happens, doesn't it :wink:?

Sometimes people say that is what happened - yet strangely cannot then support their opinion in public. At least Q Goest argues his corner. If you were more confident of your views, perhaps you would too?
 
  • #193
Q_Goest said:
Also from Davies

However Davies does allow for some kind of emergence at the border between classical and quantum mechanics, which is where separability breaks down also.

Davies correctly says that global constraints are not some extra force. Force is a localised, atomised, action - efficient causality. Constraints are just constraints. They might sound "forceful" because they act downwards to constrain the local degrees of freedom (as I say, shape them to have some distinct identity). But they are a complementary form of causality. A constraining or limiting action, not a constructive or additive one.

He is also right in saying that you cannot have constraints emerging if the analysis only recognises the micro-scale. If you close off causality at the microscale in your modelling, there is indeed no room for anything else.

No, I'm not. Feel free... Regardless, why should it be controversial to suggest that there exists a physical reality at a particular moment in time? If the argument drops into quantum mechanics, there’s no point in arguing. At that point, we have to suggest that neurons interact due to some quantum mechanical interaction, which isn’t worth arguing about.

The synchronic~diachronic dichotomy is used frequently in the emergence literature - as in the Bedau paper you appear to prefer.

http://people.reed.edu/~mab/papers/principia.pdf

This is not about QM issues but the general modelling of structures and processes - systems modelling.

I think you misunderstand. My daughter has selective attention. I have no doubt it influences neural receptive fields! lol But that’s like saying the spaceship is caused by the gun that Lievo keeps talking about. Unless you can clearly define selective attention and neural receptive fields, it won’t help.

What is undefined about the concepts of selective attention and neural receptive fields in the literature?

And Lievo's guns are precisely not an example of anything I have been talking about. What could be a more reductionist view of reality than CA? He may see "spaceships" and "guns" looking at the rigid operations of a finite state automaton. But that "meaning" is completely absent from the CA itself. It emerges nowhere within it.

Bedau uses these guns to argue for weak emergence. And I agree, so far as emergence goes, it is as weak as can be imagined. I am talking about something else here.

I don’t disagree with anything you said here except that I’m sure IBM isn’t somehow influencing neuroscience with profits from their computers.

I was saying IBM dreams up these kinds of stunts to sell more supercomputers to universities.

Perhaps you haven't been keeping track of the controversies? IBM getting savaged by its own original Blue Brain scientist.

Why did IBM let Mohda make such a deceptive claim to the public?
I don't know. Perhaps this is a publicity stunt to promote their supercomputer. The supercomputer industry is suffering from the financial crisis and they probably are desperate to boost their sales. It is so disappointing to see this truly great company allow the deception of the public on such a grand scale.

http://nextbigfuture.com/2009/11/henry-markram-calls-ibm-cat-scale-brain.html
 
  • #194
from apeiron's link (criticizing IBM):

In real life, each segment of the branches of a neuron contains dozens of ion channels that powerfully control the information processing in a neuron. They have none of that. Neurons contain 10's of thousands of proteins that form a network with 10's of millions of interactions. These interactions are incredibly complex and will require solving millions of differential equations.

This is exactly right, it's not an easy problem. My Textbook:

"From Molecules to Networks: and introduction to cellular and molecular neuroscience"

It is no exaggeration to say that the task of understanding how intrinsic activity, synaptic potentials, and action potentials spread through and are integrated within the complex geometry of the dendritic trees to produce the input-output operations of the neurons is one of the main frontiers of neuroscience.

But we do have NEURON and GENESIS to help us with this, using the compartmental models developed by Rall and Shepherd. (Shepherd co-authored this chapter of the textbook.)

I think what Q_Goest doesn't recognize is that passive currents (compartment models) only represent intrinsic currents, not the more interesting active currents (i.e. all the different channel dynamics: the feedback circuit between membrane potential and channel activation, the action potential).

Or the whole molecular/genetic behavior part that we've all continued to sweep aside. Neural activity also stimulates changes in genetic expression, so you have to talk about mRNA and transcription factors which modulate the geometry of the dendrites and the strength of both chemical and electrical synapses (by increasing and decreasing channel sites) so there's even more complicated feedback happening at the molecular level.

Yes, we use compartmental models. They're not nearly the whole story.
 
  • #195
Pythagorean said:
this is mainstream (but in a nascent manner).

Evidence that it's mainstream:

apeiron said:
the effect of selective attention on neural receptive fields has been one of the hottest areas of research for the last 20 years.

As a neuroscientist, I wish you would both stop pushing what is either a poorly supported or a wrong view of what is mainstream in neuroscience. Saying that this or that work is compatible with an interpretation is not evidence that the interpretation is influential (in other words, I agree about the nascent manner, but a nascent mainstream is simply self-contradictory). Saying that receptive fields have been one of the hottest areas of research in the last 20 years is simply wrong. Single-unit techniques in animals were the gold standard from maybe 1935 to about 1990. Since then, what has happened is an impressive rise of new techniques devoted to recording in humans, mostly fMRI, and these techniques can't record receptive fields. You may not believe me, in which case simply look at how many papers you can retrieve with either brain + MRI or brain + receptive field in the last 20 years.

Q_Goest said:
However Davies does allow for some kind of emergence at the border between classical and quantum mechanics, which is where separability breaks down also.
Can you explain why separability should break down at this border, given that QM is perfectly computable?

Q_Goest said:
that’s like saying the spaceship is caused by the gun that Lievo keeps talking about. Unless you can clearly define selective attention and neural receptive fields, it won’t help.
These two concepts are perfectly and operationally defined. What's the problem?

apeiron said:
And Lievo's guns are precisely not an example of anything I have been talking about. What could be a more reductionist view of reality than CA? He may see "spaceships" and "guns" looking at the rigid operations of a finite state automaton. But that "meaning" is completely absent from the CA itself. It emerges nowhere within it.

Bedau uses these guns to argue for weak emergence. And I agree, so far as emergence goes, it is as weak as can be imagined. I am talking about something else here.
I'm sure you're well aware that CGL is universal for computation, meaning that any computable system you may think of can be implemented on it. So are you saying strong emergence is not computational? If not, on what basis would you decide whether a given CGL pattern shows or doesn't show strong emergence?
 
  • #196
We seem to be getting deeply into the neuroscience here, which is a perfectly appropriate place to go to study the neural correlates of mental qualia like free will. I would just like to point out, as we have been talking about Pr, Ps, and M states, that we are free to adopt a physicalist perspective, and even choose to assume that physical states actually exist, and further assume that they form a closed system (whether it be Ps or Pr that is the more useful approach, or whether either approach can be more or less useful in a given context). All the same, every single one of those is an assumption involved in a modeling approach; not a single one is an axiom (because they are not self-evident), and not a single one has convincing evidence to favor it. They are just choices made by the scientist to make progress.

In my view, which may be a minority but is logically bulletproof, there is no reason to imagine that we know that physical states either exist, or are closed, or even that it makes any sense to imagine that either of those are true beyond the usual idealizations we make to get somewhere. What is demonstrably true is that everything we mean by a physical state arises from perception/analysis of a class of experiences, all done by our brains. We can notice that our perceptions are correlated with the concepts we build up around the idea of a physical state, and we gain predictive power by building up those concepts, and not one single thing can we say beyond that. This is just something to bear in mind as we dive into the physicalist perspective, either at a reduced or systems level-- it is not at all obvious that this approach will ever be anything but a study of the neural correlates of mental states, i.e., the mental states may always be something different.
 
  • #197
Lievo said:
As a neuroscientist, I wish you would both stop pushing what is either a poorly supported or a wrong view of what is mainstream in neuroscience. Saying that this or that work is compatible with an interpretation is not evidence that the interpretation is influential (in other words, I agree about the nascent manner, but a nascent mainstream is simply self-contradictory).

Nascent mainstream is not self-contradictory. Nonlinear science is nascent to the mainstream, but it is mainstream (i.e. the work is published in well-known peer-reviewed journals). I don't know what you think my interpretation is; I responded to Q_Goest who (wrongly) put computational models and nonlinear science at odds.

You would agree, I hope, that Hodgkin-Huxley is a (~60-year-old) mainstream model. It's a nonlinear model. Popular among computational scientists is the Morris-Lecar model (because it's 2D instead of the HH model's 4D, making phase-plane analysis and large networks much easier to handle).
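For concreteness, here is a minimal Morris-Lecar sketch (plain Python with numpy; the parameter values are one commonly used set from the modelling literature and should be treated as illustrative, not canonical):

import numpy as np

# Morris-Lecar: a 2D conductance-based neuron model.
# State: membrane voltage V (mV) and K+ gating variable w.
C, gL, gCa, gK = 20.0, 2.0, 4.4, 8.0        # capacitance and conductances
VL, VCa, VK = -60.0, 120.0, -84.0           # reversal potentials (mV)
V1, V2, V3, V4, phi = -1.2, 18.0, 2.0, 30.0, 0.04

def derivs(V, w, I):
    m_inf = 0.5 * (1 + np.tanh((V - V1) / V2))   # instantaneous Ca2+ activation
    w_inf = 0.5 * (1 + np.tanh((V - V3) / V4))   # steady-state K+ activation
    tau_w = 1.0 / np.cosh((V - V3) / (2 * V4))   # K+ gating time scale
    dV = (I - gL*(V - VL) - gCa*m_inf*(V - VCa) - gK*w*(V - VK)) / C
    dw = phi * (w_inf - w) / tau_w
    return dV, dw

# Forward-Euler integration; I = 100 puts this parameter set in the
# repetitive-spiking (limit cycle) regime.
V, w, dt, I = -60.0, 0.0, 0.05, 100.0
orbit = []
for _ in range(20000):
    dV, dw = derivs(V, w, I)
    V, w = V + dt * dV, w + dt * dw
    orbit.append((V, w))

Because the state is just (V, w), everything the model does - rest, threshold, limit-cycle spiking - can be read off a 2D phase plane, which is exactly why it is so convenient for analysis and for large networks.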

Theoretical neuroscience institutes and centers have popped up all over the world in the last 20 years, doing exactly the kind of work I'm talking about (you have Redwood at Berkeley, CTN in New York, the Seung Lab at MIT, and computational neuroscience programs within neuroscience departments).

We can argue about the semantics of "mainstream", but this is a well-funded, productive area of research.

Here... from 2001, TEN years ago...

Neurodynamics: nonlinear dynamics and neurobiology
Current Opinion in Neurobiology
Volume 11, Issue 4, 1 August 2001, Pages 423-430
 
Last edited:
  • #198
Lievo said:
As a neuroscientist, I wish you both would stop presenting what is either a poorly supported or a wrong view of what is mainstream in neuroscience... You may not believe me, in which case simply look at how many papers you can retrieve with either brain + MRI or brain + receptive field over the last 20 years.

You lose credibility with every post.

Check the top 10 papers from Nature Neuroscience over the past 10 years.
http://www.stanford.edu/group/luolab/Pdfs/Luo_NatRevNeuro_10Y_anniv_2010.pdf

Attention effects, neural integration, homeostatic organisation and other forms of global top-down self-organisation feature prominently.

2001 Brainweb 2.0: the quest for synchrony
2002 Attention networks: past, present and future
2004 Homeostatic plasticity develops!
2006 Meeting of minds: the medial frontal cortex and social cognition

Your claim that scanning can't be used to research top-down effects is obvious nonsense.

http://www.jneurosci.org/content/28/40/10056.short
or
http://www.nature.com/neuro/journal/v3/n3/full/nn0300_284.html
or
http://www.indiana.edu/~lceiub/publications_files/Pessoa_Cog_Neurosci_III_2004.pdf

Or as wiki says...

In the 1990s, psychologists began using PET and later fMRI to image the brain in attentive tasks. Because of the highly expensive equipment that was generally only available in hospitals, psychologists sought cooperation with neurologists. Pioneers of brain imaging studies of selective attention are psychologist Michael I. Posner (then already renowned for his seminal work on visual selective attention) and neurologist Marcus Raichle. Their results soon sparked interest from the entire neuroscience community in these psychological studies, which had until then focused on monkey brains. With the development of these technological innovations, neuroscientists became interested in this type of research that combines sophisticated experimental paradigms from cognitive psychology with these new brain imaging techniques. Although the older technique of EEG had long been used by cognitive psychophysiologists to study the brain activity underlying selective attention, the ability of the newer techniques to actually measure precisely localized activity inside the brain generated renewed interest from a wider community of researchers. The results of these experiments have shown broad agreement with the psychological and psychophysiological work and with the experiments performed on monkeys.
http://en.wikipedia.org/wiki/Attention

And are you suggesting electrode recording is somehow passé?

http://www.the-scientist.com/2009/10/1/57/1/
 
Last edited by a moderator:
  • #199
Lievo said:
So are you saying strong emergence is not computationnal?

Yes, by design Turing machines are isolated from global influences. Their internal states are simply informational, never meaningful. Are you unfamiliar with Searle's Chinese Room argument for example?

BTW, I don't accept the weak~strong dichotomy as it is being used here because it too hardwires in the very reductionist assumptions that are being challenged.

As I have said, the entire system is what emerges. So you have a development from the vaguely existing to the crisply existing. But if weak = vague, and strong = crisp, then perhaps you would be making a fair translation.
 
  • #200
Ken G said:
All the same, every single one of those is an assumption involved in a modeling approach; not a single one is an axiom (because they are not self-evident), and not a single one has convincing evidence to favor it. They are just choices made by the scientist to make progress.

I agree. An axiom is just an assumption formulated for the purposes of modelling. Calling it self-evident is just another way of saying "I can't think of anything better at the moment".

Mathematicians, of course, have often believed they were accessing Platonic truth via pure reason. But scientists follow the pragmatist philosophy of C.S. Peirce.
 
  • #201
Yes, and if a convincing case can be argued for an equation along the lines of
M = Ps <--> Pr,
I think it would be even more interesting if the relation were something more akin to
P(M) = Ps <--> Pr,
where P means "the projection onto the physical." It would not be necessary for P to be invertible, so the physicalist claim that
M = P⁻¹(Ps <--> Pr)
does not necessarily follow logically.

It is apparent that changes in the Ps <--> Pr interaction correlate with changes in M, and can be viewed as causal of changes in M, because the detection of causality is one of the main properties of the P operation. However, if E signifies the evolution operator involved in making some change, we still cannot say
E[M] = P⁻¹(E[Ps <--> Pr]),
as that requires not only that P is invertible, but also that it commutes with E. Instead, what we can say is
E[P(M)] = E[Ps <--> Pr].
If we assert that EP = PE' as our definition of E', then we have
P(E'[M]) = E[Ps <--> Pr],
and this is the fundamental equation that systems-type neuroscientists study. But note we must wonder to what extent P is invertible, and to what extent P commutes with E. If neither holds, we have a particularly interesting situation.
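As a toy numerical illustration of the point (my own sketch; the 2x3 projection P and the rotation E are invented for illustration and carry no neuroscientific content):

import numpy as np

# Let "mental" states M live in R^3 and let P project onto a 2D
# "physical" subspace, so P has a nontrivial kernel and is not invertible.
P = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

# E: an evolution acting on physical states (here a 90-degree rotation).
E = np.array([[0.0, -1.0],
              [1.0,  0.0]])

M1 = np.array([1.0, 2.0, 5.0])
M2 = np.array([1.0, 2.0, -7.0])   # differs from M1 only inside ker(P)

# Both project to the same physical state, so E[P(M)] cannot tell them
# apart: P is many-to-one, and P^-1 simply does not exist.
assert np.allclose(P @ M1, P @ M2)
assert np.allclose(E @ (P @ M1), E @ (P @ M2))

Any E' satisfying EP = PE' reproduces the physical dynamics equally well, yet it is pinned down only up to what it does on the kernel of P - precisely the "particularly interesting situation" just described.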
 
  • #202
Ken G said:
Yes, and if a convincing case can be argued for an equation along the lines of M = Ps <--> Pr,

OK, you have lost me there. I'm not even sure if you are making a satirical argument. So you may have to explain it more fully :smile:.

A quick aside: to make a systems argument, one of the issues is having a systems notation. So I used set theory (as suggested by Salthe's hierarchy theory). But Peirce's sign of illation might be another route (http://www.math.uic.edu/~kauffman/TimeParadox.pdf).

Or, because the = sign is always a statement about symmetry, and the systems view is about symmetry-breaking, perhaps the notion of the reciprocal is the most accurate way to denote a process and its obverse - an invertible operation?

So Pr = 1/Ps could be another way of framing my argument. But then I was saying that Ps is the super-set, so not really invertible.

Anyway, a logical notation that expresses the concepts is a live issue - and Louis Kauffman has highlighted the connections between Peirce, Nicod and Spencer-Brown. Another, longer paper you might enjoy is http://www.math.uic.edu/~kauffman/Peirce.pdf.

Back to what you posted.

M = Ps <--> Pr - I translate this as saying the mind contains two contrasting views of causality that are formed mutually as a symmetry-breaking of ignorance. But I still think the set-theoretic view is more accurate.

My claim on P (models of physical causality) is that Ps = Pl + Pg. So systems causality is local construction plus global constraints.

Whereas Pr = Pl. So reductionist causality is based on just local construction.

However, I then also claim that global constraints are still implied in Pr - they are just frozen, and so can be left out of the modelling for simplicity's sake. Only the local construction has to be explicitly represented.

So Pr = Pl + not-Pg? Doesn't really work, does it?

But you raise an interesting issue just about the need for a formal notation that captures the ideas of systems causality. There is an abundance of notation to represent reductionist constructive arguments, but not really an equivalent for the systems view.
 
  • #203
Ken G said:
it is not at all obvious that this approach will ever be anything but a study of the neural correlates of mental states, i.e., the mental states may always be something different.
Sure, it's logically sound. But could you think of a way to make a positive statement along these lines?
 
  • #204
Pythagorean said:
Nascent mainstream is not self-contradictory. Nonlinear science is nascent to the mainstream, but it is mainstream (i.e. the work is published in well-known peer-reviewed journals). I don't know what you think my interpretation is; I responded to Q_Goest who (wrongly) put computational models and nonlinear science at odds.
I certainly agree your view is mainstream within the subfields that care about it. My point is that you're stretching "mainstream" too far by applying it to neuroscience as a whole. The fact is that most neuroscientists are not influenced by this view, because it does not have any impact on their day-to-day work. By the way, I guess what I don't like is that calling one's view mainstream is close to an appeal to authority, which is hardly ever a good thing. When someone uses that, it's usually because they don't have anything better to say. I know you can do better than that. :wink:
 
Last edited:
  • #205
Q_Goest is the one who introduced mainstream as being scientifically meaningful.

I don't think most neuroscientists are influenced by the view, but I interpreted mainstream as pertaining to scientific peer review: the list of acceptable journals.
 
  • #206
In other words, these ideas aren't being rejected by the scientific community, and are passing peer review, even in the traditional neuroscience journals.

My traditional neuro advisor is happy to reach across the table and help us motivate our models biologically.
 
  • #207
Lievo said:
By the way, I guess what I don't like is that calling one's view mainstream is close to an appeal to authority, which is hardly ever a good thing. When someone uses that, it's usually because they don't have anything better to say. I know you can do better than that. :wink:

Yes, you have made your position clear on appeals to authority...

As a neuroscientist, I wish you both would stop presenting what is either a poorly supported or a wrong view of what is mainstream in neuroscience.

And why you continue to battle strawmen beats me.

It is only you who set this up as claims about being mainstream. You can check back to the post where I urged Q Goest to focus on the particular literature of neural receptive fields and top-down attentional effects if you like. You will see that it was hardly an appeal to authority but instead an appeal to consider the actual neuroscience.

https://www.physicsforums.com/showpost.php?p=3177690&postcount=36

Of course, you are now trying to say that the receptive field studies are not mainstream, or not new, or not influential, or something.

But all you have supplied as a source for that view is an appeal to authority. And a lame suggestion for a Google search.

BTW, Google Scholar returns 142,000 hits for brain + fMRI and 164,000 for brain + receptive fields. What does that tell us?
 
  • #208
apeiron said:
You lose credibility with every post.
Your usual line when you get it wrong, it seems. :rolleyes:

apeiron said:
Check the top 10 papers from Nature Neuroscience over the past 10 years.
Small detail: this is Nature Reviews Neuroscience, which is not the same as Nature Neuroscience or Nature.

Not-so-small detail: what you suggest is bad methodology. Don't you know how to use PubMed?

I've checked it anyway. Among these 10 papers, only one discusses receptive fields, and even there only one reference is discussed - out of 114.

So again: you said that "the effects of selective attention on neural receptive fields have been one of the hottest areas of research for the last 20 years", and again this is wrong. I can only wonder how you came to claim otherwise when it's wrong even by the data you emphasized.

apeiron said:
Your claim that scanning can't be used to research top-down effects is obvious nonsense.
Obvious strawman-building, given that I never made this claim. I'd better not: this is my line of research. :biggrin:

apeiron said:
And are you suggesting electrode recording is somehow passé?
In a sense yes, and in a sense no. The "no" part is that these kinds of data are still very interesting. The "yes" part is that data collection using electrode recording has become slow compared with the neuroimaging techniques of the last 20 years in humans.
 
Last edited:
  • #209
Pythagorean said:
Q_Goest is the one who introduced mainstream as being scientifically meaningful.

I don't think most neuroscientists are influenced by the view, but I interpreted mainstream as pertaining to scientific peer review: the list of acceptable journals.
Oh, I just misunderstood you then. To me, mainstream is "the common current thought of the majority". I just missed the part where a different definition was in play. My bad, sorry.
 
  • #210
apeiron said:
Yes, by design Turing machines are isolated from global influences.
If reality is computational, then all global influences are inside a TM. That may or may not be the case, but if you assume the global environment is not inside a Turing machine, it's no surprise that you will conclude it's not computational.

apeiron said:
Their internal states are simply informational, never meaningful.
By definition?

apeiron said:
Are you unfamiliar with Searle's Chinese Room argument for example?
You actually give a sh.. about this argument?

apeiron said:
BTW, I don't accept the weak~strong dichotomy as it is being used here because it too hardwires in the very reductionist assumptions that are being challenged.

As I have said, the entire system is what emerges. So you have a development from the vaguely existing to the crisply existing. But if weak = vague, and strong = crisp, then perhaps you would be making a fair translation.
Is there any system that does not emerge, by these definitions?

apeiron said:
BTW, Google Scholar returns 142,000 hits for brain + fMRI and 164,000 for brain + receptive fields. What does that tell us?
*cough cough* I'm afraid it tells us you are not very familiar with this engine. brain + "receptive field" (with the phrase in quotes) returns 88,700 hits. You see the trick?

You may notice that brain + MRI returns 1,130,000.
 
Last edited:
