Dynamical Neuroscience: Wiki Article Entry - Input Needed

  • Thread starter: Pythagorean
  • Tags: Neuroscience
Summary:
The discussion focuses on the need for input on a newly created wiki article about dynamical neuroscience, highlighting its poor structure and clarity. Participants emphasize that the term "dynamical" should refer to mathematical representations of systems rather than artificial neural networks (ANNs), which they argue are irrelevant to the field. There is a call for a merger of the article with existing content on dynamical systems and a complete rewrite to eliminate personal biases and conjectures. The conversation also touches on the importance of including various scientific perspectives, such as biological physics and chemical kinetics, in understanding brain dynamics. Overall, there is a consensus that the article requires significant revisions to accurately reflect the complexities of dynamical neuroscience.
  • #61
atyy said:
Basically, there are not just 2 domains of description, but many.

On pragmatic grounds, yes, we are allowed to create as many modelling paradigms as we wish. Models are free inventions of the human mind, so there is no limit on how creative we can get, or how finely we wish to divide the cake.

But on fundamental grounds - which I thought we were debating - in fact the reductionist goal is to reduce everything in reality to a single common basis (a TOE), and the rejoinder from a systems perspective is in fact that instead we always seem to end up with dichotomies, two polar alternatives that seem to have equal pull on our imaginations.

So should we reduce all neuroscience to dynamics, or to computation? Or should we unite the two by honouring their fundamental differences?

In theoretical biology, the systems view is understood. In theoretical neuroscience, not so much :smile:.

That does not mean there are not in fact multiple modelling paradigms. Just that a modelling fundamentalist would expect them to be arranged in a hierarchy so that they would still all "talk to each other". And then a reductionist would expect this hierarchy to work bottom-up - from some actual physical/material/dynamical TOE. While a systems thinker accepts that this hierarchy has in fact its two poles - so the semiotic/formal/computational is also fundamental in the way it anchors the other end of the spectrum.

In this way, we have both your "many models" as the stuff which fills the spectrum, and then the two fundamental poles needed to anchor that hierarchy.

The alternative view would be that of extremist social constructionism - models are just all human inventions, none with any more claim to fundamentality than any others. We would have a patternless mosaic, a space of modelling fragments each with local application but no global coherence.

So be careful what you wish for!
 
  • #62
apeiron said:
That does not mean there are not in fact multiple modelling paradigms. Just that a modelling fundamentalist would expect them to be arranged in a hierarchy so that they would still all "talk to each other". And then a reductionist would expect this hierarchy to work bottom-up - from some actual physical/material/dynamical TOE. While a systems thinker accepts that this hierarchy has in fact its two poles - so the semiotic/formal/computational is also fundamental in the way it anchors the other end of the spectrum.

In this way, we have both your "many models" as the stuff which fills the spectrum, and then the two fundamental poles needed to anchor that hierarchy.

The alternative view would be that of extremist social constructionism - models are just all human inventions, none with any more claim to fundamentality than any others. We would have a patternless mosaic, a space of modelling fragments each with local application but no global coherence.

I was hoping for the last view, but also with global coherence.
 
  • #63
atyy said:
I was hoping for the last view, but also with global coherence.

OK, so what is the nature of that coherence exactly?

Both the conventional reductionist and the systems view would expect coherence from a hierarchical arrangement of models that all "talk to each other" across their levels.

That in itself implies a common language - and information theory is emerging as that standard coin of exchange between theory domains. (Whereas more traditionally, a scientific coherence was claimed because "everything was made of the same kind of ultimate stuff" - science being a materialistic discourse.)

So you have the differentiation of models into levels of a hierarchy, and the integration of these models through some common language, some standard unit of exchange. How it works out in all its gory details is still debatable, but the general model of how global coherence would be achieved by the scientific enterprise seems both explicit and widely accepted. Witness the angry rejection of PoMo commentaries in the Philosophy of Science.

So if you are not taking this hierarchical approach to a universe of models, then exactly how do you imagine a coherence being achieved?

And further, are you claiming that the current patchwork of models is not actually connected in this fashion - if albeit loosely and imperfectly?
 
  • #64
atyy said:
Hmmm, that's a very narrow definition of dynamical systems theory. It's morally ok in some sense, since Poincare is rightly regarded as the father of the topological approach to differentiable dynamics. While acknowledging you have a point, it does boggle my mind that you could exclude Newton. Even KAM theory had its roots in the Hamilton-Jacobi formulation of mechanics, and whether action-angle variables (invariant tori in the modern language) exist.

We can agree on all kinds of observations, but where we divide and categorize sets of observations is where we have conflicts ("It's QM", "no, it's CM!") or ("it's blue", "no, it's indigo!").

I don't consider Einstein a quantum physicist either; I think Newton and Einstein are both unique cases. They are pretty much our (i.e. society's) ideal vision of a scientist as you really can't box them up as this or that. Of course, I feel the same way about people like Poincare and Erdos :) they're just not as popular to the general public.

I was really taking the particle physics point of view, as Rhody says!

Well, I guess to me, symbolic dynamics means you take a particular state of the whole system of particles to be an emergent qualitative state. And while the dynamical system really has infinite states, you could (as an example) partition the phase volume into two and call one state "1" and the other state "0".

But I have no experience actually handling Markov partitions, so this is just my impression from reading literature that's full of cumbersome jargon.
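A minimal sketch of that coarse-graining, taking the logistic map as a stand-in for the dynamical system and putting the partition boundary at x = 0.5 (both choices are purely illustrative, not anything specific to neuroscience):

```python
def logistic(x, r=4.0):
    """One step of the logistic map, a standard chaotic toy system."""
    return r * x * (1.0 - x)

def symbolic_orbit(x0, n_steps=20, threshold=0.5):
    """Coarse-grain a continuous orbit into a 0/1 symbol sequence:
    each state is replaced by the label of the partition cell it falls in."""
    x, symbols = x0, []
    for _ in range(n_steps):
        symbols.append(1 if x >= threshold else 0)
        x = logistic(x)
    return symbols

print(symbolic_orbit(0.2))   # e.g. [0, 1, 1, ...] - the emergent qualitative states
```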
 
  • #65
apeiron said:
OK, so what is the nature of that coherence exactly?

Both the conventional reductionist and the systems view would expect coherence from a hierarchical arrangement of models that all "talk to each other" across their levels.

That in itself implies a common language - and information theory is emerging as that standard coin of exchange between theory domains. (Whereas more traditionally, a scientific coherence was claimed because "everything was made of the same kind of ultimate stuff" - science being a materialistic discourse.)

So you have the differentiation of models into levels of a hierarchy, and the integration of these models through some common language, some standard unit of exchange. How it works out in all its gory details is still debatable, but the general model of how global coherence would be achieved by the scientific enterprise seems both explicit and widely accepted. Witness the angry rejection of PoMo commentaries in the Philosophy of Science.

So if you are not taking this hierarchical approach to a universe of models, then exactly how do you imagine a coherence being achieved?

And further, are you claiming that the current patchwork of models is not actually connected in this fashion - if albeit loosely and imperfectly?

Well, what I'm saying is morally related to hierarchical thinking - but with no model being fundamental, and no hierarchy - more a patchwork of coordinate charts - but even then not quite since there is no standard unit of exchange (except the human mind).
 
  • #66
apeiron said:
This is an informational way of modelling dynamical processes. So not what I am talking about.

That is more of a distracting coincidence; I was actually referring to the subjectivity allowed to the investigator to define the partitions of the system himself. The investigator is free to implement a hierarchical approach... and for particular kinds of systems (at least), if we define the partition around the bifurcations of the system, we cannot even avoid adhering to hierarchy (the bifurcation branches) and its relationship to scale (the bifurcation parameter).
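As a toy illustration of partitioning around bifurcations (logistic map again, standing in for a real system): sweep the bifurcation parameter, record the long-run states, and the branching of qualitative states - the natural partition cells - shows up as the number of distinct states doubling.

```python
def logistic_attractor(r, x0=0.5, transient=500, keep=100):
    """Iterate the logistic map past its transient and return the distinct
    states it settles onto for a given bifurcation parameter r."""
    x = x0
    for _ in range(transient):
        x = r * x * (1.0 - x)
    states = set()
    for _ in range(keep):
        x = r * x * (1.0 - x)
        states.add(round(x, 4))
    return sorted(states)

# Number of long-run states as r crosses the period-doubling bifurcations:
# one branch, then two, then four.
for r in (2.8, 3.2, 3.5):
    print(r, len(logistic_attractor(r)))
```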
 
  • #67
atyy said:
Well, what I'm saying is morally related to hierarchical thinking - but with no model being fundamental, and no hierarchy - more a patchwork of coordinate charts - but even then not quite since there is no standard unit of exchange (except the human mind).

But is this your goal, or just a description of best likely outcome? We were talking about goals (even you expressed coherence as a hope of yours).

And your comment about there being no unit of exchange apart from the human mind is baffling. Units of exchange are what a modelling mind would create, not what they would "be".

It might help if you could supply references to your brand of epistemology here.

For instance, an example of the adoption of information as the new universal coin of modelling is... http://en.wikipedia.org/wiki/Digital_physics

Well, actually, that is an example of people jumping from epistemology to ontology. They don't just believe physics can be modeled in the standard language of information theory, they claim it actually is just all information!

So this is an illustration of the perils of orthodox reductionism - going overboard in just one direction. But it also shows that the other pole of description exists even at the "lowest level" of material physics.

There is a battle of views going on that is framed dichotomistically - substance vs form, matter vs information.

The strings/TOE debate is another example. Shall we model reality in terms of its fundamental degrees of freedom or its fundamental constraints? The expectation of the TOE camp is that degrees of freedom are infinite, but only one form of constraint (the string theory that works) is actually possible. So then everything (even the fundamental constants, fingers crossed) will be "explained by mathematics".

So again, no quarrel that science is pragmatically formed by a ragged patchwork of modelling domains. But at the same time, the same basic fundamental division infects/unites science at its every level.

Charts can create their own co-ordinates. But generally they are in fact all trying to orientate themselves along the same general compass setting that points north to form/information, and south to substance/matter.

Neuroscience is just another example. And the best neuroscience - like Grossberg with his plasticity~stability dilemma, or Friston with his Bayesian brain - is focused on finding the appropriate balance between the informational and material view.
 
  • #68
Pythagorean said:
That is more of a distracting coincidence; I was actually referring to the subjectivity allowed to the investigator to define the partitions of the system himself. The investigator is free to implement a hierarchical approach... and for particular kinds of systems (at least), if we define the partition around the bifurcations of the system, we cannot even avoid adhering to hierarchy (the bifurcation branches) and its relationship to scale (the bifurcation parameter).

Again, you are making my point for me. If it is a subjective work-around, it is not an objective consequence of the model.

Yes, we can get away with doing things simply - either pretending reality is just dynamics, or just computation. We can rely on our informal, subjective, knowledge to avoid misusing models based on those reductionist assumptions.

But that is not the same thing as having a formal basis to a domain of knowledge. It does not address the issue of what is fundamental.

You can then respond, the fundamental doesn't actually matter if we can get by on pragmatics. And again, for some people - many probably - this is indeed enough to satisfy their personal interests.

But for science itself, it does matter. The enterprise of science does have to ensure that all the local domains of modelling connect up objectively - even just pragmatically! - somehow. And a hierarchy of modelling is the way this is being done. Which in turn means extracting the fundamental co-ordinates of this hierarchy (so as to give all the specialised sub-domains some bearings to steer by).
 
  • #69
I think you misunderstand; the point is not to isolate dynamics or computation. The point is that you must already integrate them in the first place. You can do it consciously or you can do it by default (as you hinted at yourself in your reply to atyy).

You can't model everything at once without losing specificity and you can't specify without losing generality. So the investigator has to choose the regime that is appropriate to his question. It's not a "subjective workaround". The subjective part is that the investigator chooses the question to ask, and the partitions can be divided differently for different questions (but all the same underlying system).

From there, you can use any modeling paradigm you wish with the abstracted partitions. For instance, you can treat each partition as a vertex on a graph, translate dynamical events to the edges connecting the vertices, and take a standard connectionist approach.
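A bare-bones version of that construction, under the same toy assumptions as before (a two-cell symbolic partition): each partition cell becomes a vertex and each observed transition becomes a weighted edge, which can then be handed to whatever graph-theoretic or Markov machinery you like.

```python
from collections import Counter

def transition_graph(symbols):
    """Build a weighted transition graph from a symbol sequence: vertices are
    partition cells, and the weight on edge (a, b) counts how often the
    trajectory stepped from cell a to cell b."""
    return Counter(zip(symbols[:-1], symbols[1:]))

symbols = [0, 1, 1, 0, 1, 0, 0, 1, 1, 1]   # e.g. the output of a symbolic orbit
print(transition_graph(symbols))
# e.g. Counter({(0, 1): 3, (1, 1): 3, (1, 0): 2, (0, 0): 1})
```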
 
  • #70
Pythagorean said:
I think you misunderstand; the point is not to isolate dynamics or computation.

Sorry, I didn't realize you are probably referring to hidden markov modelling here.

And yes, that would indeed be a hybrid approach because the model acts as an informational constraint on the uncertainty of the world, the dynamical degrees of freedom.

But from dim memory - it's been 20 years - HMM approaches are pretty low-powered in practice. And they seemed crude rather than elegant in principle.

If you have references to where they are proving to be now important in theoretical neuroscience, that would be interesting.

It is also a fair point that in any domain of modelling, you need to trade-off generality and specificity. But the question was, what does that look like in neuroscience as a whole, or science as a whole?

And in any case, you are still arguing for a dichotomy in your co-ordinate basis. You are re-stating the fact that there needs to be a compass bearing that points north to generality (global form) and south to specificity (local substance).

Unless you can point out the two complementary directions for your domain of modelling, how do you make any definite specificity~generality trade-off?
 
  • #71
apeiron said:
But from dim memory - it's been 20 years - HMM approaches are pretty low-powered in practice. And they seemed crude rather than elegant in principle.

If you have references to where they are proving to be now important in theoretical neuroscience, that would be interesting.

Most successful HMM models are reductionist (receptor-ligand kinetics and protein-kinase interactions). They are quite standard in biophysics.

On the larger scale, I did not read this in a paper or anything; it is more my imagination that recognizes the room there for hybrid modeling. The big problem is how vast the phase space of a high-dimensional system is; finding a regime in your model that fits a particular disease is a lot like finding a survivor in a forest. There are many more places to look than a single person might have time for in his lifetime.
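For concreteness, a toy two-state channel HMM of the sort I have in mind - closed/open hidden states with made-up transition and emission probabilities; the scaled forward algorithm below is the standard textbook one, not taken from any particular paper:

```python
import numpy as np

# Hypothetical two-state channel: hidden state 0 = closed, 1 = open.
A = np.array([[0.95, 0.05],    # hidden-state transition probabilities
              [0.10, 0.90]])
B = np.array([[0.9, 0.1],      # emission probabilities: row = hidden state,
              [0.2, 0.8]])     # column = observed current level (low, high)
pi = np.array([0.5, 0.5])      # initial state distribution

def forward_loglik(obs):
    """Log-likelihood of an observation sequence under the HMM, via the
    standard scaled forward algorithm."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return loglik

print(forward_loglik([0, 0, 1, 1, 1, 0]))   # a short, noisy current trace
```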

It is also a fair point that in any domain of modelling, you need to trade-off generality and specificity. But the question was, what does that look like in neuroscience as a whole, or science as a whole?

And in any case, you are still arguing for a dichotomy in your co-ordinate basis. You are re-stating the fact that there needs to be a compass bearing that points north to generality (global form) and south to specificity (local substance).

Unless you can point out the two complementary directions for your domain of modelling, how do you make any definite specificity~generality trade-off?

Yes, my response was not meant to be in conflict with dichotomization. I was trying to show the common ground.

The specificity~generality trade-off comes down to scale. Long-term processes vs. short-term processes, or global (long-distance) processes vs. local (short-distance) processes.

If your scale is your bifurcation parameter, then it seems quite natural (to me) to partition your system by the bifurcations (the qualitative branching of emergent states; the transition where your system flips from one qualitative state to the other, even though the individual particles are all following the same fundamental laws and may even look like just random noise in a limited dimension slice of the system.)
 
  • #72
Pythagorean said:
If your scale is your bifurcation parameter, then it seems quite natural (to me) to partition your system by the bifurcations (the qualitative branching of emergent states; the transition where your system flips from one qualitative state to the other, even though the individual particles are all following the same fundamental laws and may even look like just random noise in a limited dimension slice of the system.)

OK, I agree that both our models and even nature make these trade-offs. So a prominent example would be the neural code issue.

The underpinning of what happens at synapses, or at axon hillocks, is dynamical/material. No question. But at what point does this specificity of material detail get ignored/filtered away by the generality of informational processes in the brain? Or does it in fact get filtered away at all?

These are the kinds of foundational issues that plague theory. And so we have to confront them head-on.

Seeking out bifurcations and similar sharp transitions in dynamics sounds like the right thing to do. But in the end - when it comes to the kind of systems-level, whole brain, neuroscience I am interested in - what does it buy you?

Yes, it may be most or all of what you need to do biophysical modelling of neurons. But is it then any use for modelling functional networks of neurons that are now modelling the world?

The people who I knew that were trying a dynamicist approach to functional modelling have ended up doing something else - hybrid approaches like the Bayesian brain. Or else are fading into forgotten history.
 
  • #73
apeiron said:
Seeking out bifurcations and similar sharp transitions in dynamics sounds like the right thing to do. But in the end - when it comes to the kind of systems-level, whole brain, neuroscience I am interested in - what does it buy you?

Yes, it may be most or all of what you need to do biophysical modelling of neurons. But is it then any use for modelling functional networks of neurons that are now modelling the world?

The people who I knew that were trying a dynamicist approach to functional modelling have ended up doing something else - hybrid approaches like the Bayesian brain. Or else are fading into forgotten history.

It is the linking of emergent (global) properties to local events that it buys you. Yes, it can be used for meta-modeling, but it is extremely demanding on resources (computational power, energy, time) to do it at the level the brain actually does.

A 'basin of attraction' study will give you an idea of where to go in control theory for the system and then you can start representing and modeling... but you have to recognize that there is degeneracy in the system. Two completely different reductionist regimes can lead to the same qualitative, emergent outcome. But they will not necessarily dynamically transition between such states the same way (i.e. they would have different exits in the Markov model), so they are not actually the same.
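A cartoon of what I mean by a 'basin of attraction' study, using a bistable toy system (dx/dt = x - x^3) purely for illustration: sample initial conditions, integrate forward, and record which attractor each one ends up in.

```python
import numpy as np

def settle(x0, dt=0.01, steps=5000):
    """Euler-integrate the bistable toy system dx/dt = x - x^3 and return
    the attractor (-1.0 or +1.0) the trajectory settles into."""
    x = x0
    for _ in range(steps):
        x += dt * (x - x**3)
    return np.sign(x)

# Label a sweep of initial conditions by their basin of attraction.
inits = np.linspace(-2.0, 2.0, 8)          # avoids the unstable point at x = 0
print({round(x0, 2): settle(x0) for x0 in inits})
# Initial conditions left of 0 flow to -1, right of 0 flow to +1.
```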

This is why there are reduced ANNs that only represent neurons in two states (1 or 0): because they are cheap on resources. But this is too extreme in the other direction, and now you are missing out on the interference and integrative effects of the passive currents that become important to diversity (i.e. optimization in the computation view).
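For reference, the textbook flavour of what I mean by two-state neurons: a minimal Hopfield-style update rule (written in the equivalent +1/-1 convention), not tied to any particular model in the literature.

```python
import numpy as np

rng = np.random.default_rng(0)

def hopfield_step(state, weights):
    """Asynchronously update one binary (+1/-1) neuron: set it to the sign of
    its weighted input, discarding all sub-threshold (passive-current) detail."""
    i = rng.integers(len(state))
    state = state.copy()
    state[i] = 1 if weights[i] @ state >= 0 else -1
    return state

# Store a single pattern with the Hebbian outer-product rule.
pattern = np.array([1, -1, 1, 1, -1])
W = np.outer(pattern, pattern) - np.eye(len(pattern))

state = np.array([1, 1, 1, 1, -1])          # a corrupted version of the pattern
for _ in range(20):
    state = hopfield_step(state, W)
print(state)                                 # typically recovers the stored pattern
```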

Bayesian brain is, of course, another empirically sound idea. If you are studying long-term associative learning especially, it's the most obvious choice. I do not know exactly how it is implemented, but my imagination runs with ideas of how I'd do it (maybe throw a squall of randomized initial conditions at my system and find the Bayesian relationships between initial conditions and the emergent properties, i.e. stimulus and representation).
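And a cheap mock-up of the 'squall of initial conditions' idea, again on a bistable toy system rather than a real neural model: estimate the conditional probability of each emergent outcome given where the initial condition (the 'stimulus') fell.

```python
import numpy as np

rng = np.random.default_rng(1)

def settle(x0, dt=0.01, steps=4000):
    """Euler-integrate dx/dt = x - x^3 and report which attractor wins."""
    x = x0
    for _ in range(steps):
        x += dt * (x - x**3)
    return 1 if x > 0 else 0     # emergent outcome: right or left attractor

inits = rng.uniform(-1.5, 1.5, size=500)              # the 'squall' of stimuli
outcomes = np.array([settle(x0) for x0 in inits])     # the emergent representations

# Empirical conditional P(outcome = 1 | initial-condition bin).
edges = np.linspace(-1.5, 1.5, 7)
for lo, hi in zip(edges[:-1], edges[1:]):
    mask = (inits >= lo) & (inits < hi)
    print(f"x0 in [{lo:+.1f}, {hi:+.1f}): P(outcome=1) ~ {outcomes[mask].mean():.2f}")
```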

Sadly, this is outside the scope of my master's thesis, but I am actually quite interested in taking my models to the next level for my PhD thesis.
 
  • #74
Pythagorean said:
This is why there are reduced ANNs that only represent neurons in two states (1 or 0): because they are cheap on resources. But this is too extreme in the other direction, and now you are missing out on the interference and integrative effects of the passive currents that become important to diversity (i.e. optimization in the computation view).

Yep, again this is what we find it boils down to. Should our ontological basis be digital or analog? Or in fact, is it really all about the productive interaction of these "two worlds"?

Now we can attempt to solve that issue for every individual domain of inquiry. And that makes it largely a pragmatic question. Or we can attempt to solve it for the entire domain of science - discover a general law of nature that constrains all models in a "first principles" fashion.

That is the prize on offer. And we can at least assume that any functional brain system does manage to optimise these complementary imperatives. So nature could give us the big clue.

Grossberg is an example of this kind of big picture thinking. He looked for generalities (like the plasticity~stability dilemma) and then tried to cash in with a whole bundle of specific applications. But he may ultimately have been too computational.

Scott Kelso is another interesting case of a dynamicist seeking the general story to frame all the particular stories. But I think he erred too much on the dynamicist side.

Karl Friston is someone who from the start (being a younger generation) understood systems are driven by their twin imperatives, and the secret is to model the optimisation principle that lurks in their interaction.
 
  • #75
apeiron said:
And your comment about there being no unit of exchange apart from the human mind is baffling. Units of exchange are what a modelling mind would create, not what they would "be".

Yes, I was being sloppy. The above rephrasing is an acceptable interpretation of my shorthand.

I think I'm understanding you very poorly. I'm basically expressing what seems to me plain common sense. I think you are trying to formalize something grander.

Let's take Friston's work as a concrete example, since you posted a link some time ago which was free and which I read quickly. If I remember correctly, his basic point was that the input-output relationship of the brain can be described as extremizing some functional. Why is that dichotomous, and what is it dichotomous to?
 
  • #76
atyy said:
Let's take Friston's work as a concrete example, since you posted a link some time ago which was free and which I read quickly. If I remember correctly, his basic point was that the input-output relationship of the brain can be described as extremizing some functional. Why is that dichotomous, and what is it dichotomous to?

The general dichotomy employed is the entropy~negentropy one of thermodynamics. So sensory surprise is treated as disorder, and sensory anticipation as order. What is optimised over the long-term is the minimisation of surprise, the maximisation of anticipation.

This is achieved in practice by the interaction between two activities...

Agents can suppress free energy by changing the two things it depends on: they can change sensory input by acting on the world or they can change their recognition density by changing their internal states. This distinction maps nicely onto action and perception.

Grossberg's ART put it rather more simply as the interaction between two levels of memory - short term and long term.

Friston in similar fashion maps the essential hierarchical interaction to actual brain architecture...

It shows the putative cells of origin of forward driving connections that convey prediction error (grey arrows) from a lower area (for example, the lateral geniculate nucleus) to a higher area (for example, V1), and nonlinear backward connections (black arrows) that construct predictions [41]. These predictions try to explain away prediction error in lower levels. In this scheme, the sources of forward and backward connections are superficial and deep pyramidal cells (upper and lower triangles), respectively, where state units are black and error units are grey. The equations represent a gradient descent on free energy using the generative model below. The two upper equations describe the formation of prediction error encoded by error units, and the two lower equations represent recognition dynamics, using a gradient descent on free energy.
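To make that scheme slightly more concrete, here is a heavily simplified numerical cartoon - a single-level linear generative model of my own invention, not Friston's hierarchical architecture - in which a state unit does gradient descent on free energy to explain away the prediction error reported by two error units:

```python
# Toy generative model: sensation y = g * mu + noise, with a Gaussian prior on mu.
g = 2.0                         # gain of the generative mapping
prior_mu = 0.0                  # prior expectation of the hidden cause
sigma_y, sigma_p = 1.0, 1.0     # variances of sensory noise and prior

y = 3.0                         # an observed sensation
mu = prior_mu                   # state unit: current estimate of the hidden cause

for _ in range(200):
    eps_y = (y - g * mu) / sigma_y       # sensory prediction error (error unit)
    eps_p = (mu - prior_mu) / sigma_p    # prior prediction error (error unit)
    mu += 0.05 * (g * eps_y - eps_p)     # gradient descent on free energy w.r.t. mu

print(mu)   # settles at 1.2, the Bayes-optimal compromise between data and prior
```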

The paper - http://www.fil.ion.ucl.ac.uk/~karl/The free-energy principle A unified brain theory.pdf - is a good example here because it tries to unite many models under the one generalised approach. So it weaves in optimal control theory, DST, and other stuff.

There are explicit appeals to dynamical concepts, like...

surprise here relates not just to the current state, which cannot be changed, but also to movement from one state to another, which can change. This motion can be complicated and itinerant (wandering) provided that it revisits a small set of states, called a global random attractor, that are compatible with survival (for example, driving a car within a small margin of error). It is this motion that the free-energy principle optimizes.

And many sub-dichotomies are identified - such as the complementary nature of reward and error, or exploitation and exploration.

So generally, Friston is seeking two poles in interaction. And then the equilibrium balance point that optimises that interaction. Moreover, the interaction is hierarchical, with bottom-up degrees of freedom meeting top-down constraints.

On the question of whether the brain is really dynamical, or really computational, Friston's answer is clearly that it is a mix. And he tries to tie together the many earlier attempts at mixed models - like synergetics, autopoiesis, adaptive resonance, neural darwinism, optimal control theory, Hebbian cell assemblies, infomax, predictive coding, etc, etc - into one thermodynamics-based paradigm.

So Friston seeks to place a 50-plus year history of neuroscience models, which are all broadly dancing around the same anticipatory and hierarchical processing approach, on a shared footing, the free-energy principle, which in turn places neuroscience on the firm foundation of a branch of physics.

The free-energy principle is "the minimization of the free energy of sensations and the representation of their causes". So the dichotomy is the division into sensations and their causes - our impressions and our ideas. And the optimisation is about striking the balance between these two kinds of effort, so that we are expending the least possible effort in mentally modelling the world.
 
  • #77
apeiron, thanks for the long write-up. Let me ask questions in little bits to see if I understand you correctly.

Is the main point that free energy minimization is essentially maximization of entropy subject to the constraint of constant energy, so the two poles are entropy and energy, with both poles equally fundamental?
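(For concreteness, the standard decomposition I have in mind - written from memory, not quoted from the paper:)

```latex
% Variational free energy of a recognition density q over hidden causes \vartheta,
% given sensory data y:
F[q] \;=\; \underbrace{\mathbb{E}_{q(\vartheta)}\!\bigl[-\ln p(y,\vartheta)\bigr]}_{\text{expected energy}}
\;-\; \underbrace{H\bigl[q(\vartheta)\bigr]}_{\text{entropy of } q},
\qquad H[q] = -\,\mathbb{E}_{q}\bigl[\ln q(\vartheta)\bigr].
```

So minimising F pushes the entropy of q up while holding the expected energy down, which is why I read it as trading the two poles off against each other.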
 
  • #78
apeiron said:
The general dichotomy employed is the entropy~negentropy one of thermodynamics. So sensory surprise is treated as disorder, and sensory anticipation as order. What is optimised over the long-term is the minimisation of surprise, the maximisation of anticipation.

This is achieved in practice by the interaction between two activities...
Grossberg's ART put it rather more simply as the interaction between two levels of memory - short term and long term.

Friston in similar fashion maps the essential hierarchical interaction to actual brain architecture...
The paper - http://www.fil.ion.ucl.ac.uk/~karl/The free-energy principle A unified brain theory.pdf - is a good example here because it tries to unite many models under the one generalised approach. So it weaves in optimal control theory, DST, and other stuff.

I don't understand any of the paper and my following input is likely misguided, but it looks very interesting. One thing that strikes me is that it looks like neuroscience is following a path toward a "TOE" equivalent; by that I mean organizing everything under a more general pattern of processes.

Thoughts?
 
  • #79
Nano-Passion said:
I don't understand any of the paper and my following input is likely misguided, but it looks very interesting. One thing that strikes me is that it looks like neuroscience is following a path toward a "TOE" equivalent; by that I mean organizing everything under a more general pattern of processes.

Thoughts?

To a purist, a TOE might mean that everything in the universe can be described by one equation (even emotions and consciousness). To others, it simply means unifying the specific case of gravity and QM.

TOE can mean different things, but yeah, the idea is toward generalization. I think that's the general direction of any theoretical approach.
 
  • #80
atyy said:
Is the main point that free energy minimization is essentially maximization of entropy subject to the constraint of constant energy, so the two poles are entropy and energy, with both poles equally fundamental?

Sort of. A first clarification may be to swap the dichotomy of maximal~minimal for something more appropriate, like extremal~optimised. Otherwise the language tends to get in the way - as in dissipative structure theory where people can't decide whether they are talking about a maximum or minimum entropy production principle.

So the underlying theory, from a systems perspective is that in any situation you have the two extremal poles that separate the subject under discussion. The differentiation step. Then you have the complementary process of the synergistic mixing or integration, which is the optimisation action.

In terms of the thermodynamics of living/mindful structures - which is what we are talking about here with Friston - the opposing extremes would be complete certainty and complete uncertainty. Then the optimisation is the search for a productive balance of the two, over the spatiotemporal scales relevant to an organism. So for instance, we both want to know things for sure in a "right now" way and a "long term" way. Reducing uncertainty for one scale could increase it for the other. Therefore some kind of balance needs to be struck.

Also, uncertainty is about degrees of freedom yet to be disposed of. You can't teach an old dog new tricks, as they say. So that is another reason why a balance would want to be struck between a capacity to learn, to be creative due to uncertainty, and to be able to impose a certainty on thought and perception.

You can see I'm talking about all this in information theoretic terms. And that is the advantage of thermodynamics - it straddles the divide pretty well. So the usable energy~dissipated energy distinction in material reality can be formally equated to a certainty~uncertainty distinction in our subjective view. The maths of one can be used to describe the maths of the other.

And the relationship goes even deeper if you follow the infodynamics approach to dissipative structure because information is being used to regulate dynamics. Minds have a reason to exist - it is to control their worlds.

Anyway, the thing to get perhaps is that standard thermodynamics seems to say that the goal of reality is to entropify gradients. If a source of energy/order can be dissipated to waste heat/disorder, then it must be.

This does seem like a simple extremum principle - thou shalt maximise disorder! But it also hides the larger systems story. There have to be the two extremes to have a gradient (an initial state of order, a final state of disorder). And then the disordering has to actually happen in time. So there is an optimal rate for the process - which is the fastest possible perhaps, but as we can tell from the long history of our universe, not actually instantaneous.

Then from this baseline simplicity, complexity can arise. Any region of the universe that can accelerate the entropification rate can also afford a complementary measure of deceleration. Or in other words, life can arise as order (negentropy) because it is increasing the local disorder (entropy). And the way it does this is by capturing energy and turning it into stored information - the physical structure that is a body, its genes, its neural circuits.

So dissipative structure dips into the flow of entropification to build an informational self that can exist because it raises the general rate of entropification.

Now, there is a lifecycle balance to be optimised here, as said. A living system is a mix of its genes and its metabolism, its information and its dynamics. It needs to be able to regulate its world, but there is a danger in trying to over-regulate.

In theoretical biology, Stan Salthe models this dilemma as the canonical lifecycle of immaturity, maturity, senescence. A gastrula is immature - lightly constrained, fast growing/entropifying, still many degrees of freedom open to it. A mature organism has a more structured balance - it no longer grows, but still repairs, still learns. Then a senescent individual is overloaded by informational constraints - it is well-adapted to its world, but in a now rigid and brittle fashion. It can no longer respond to sharp environmental perturbation and is subject to catastrophic collapse and material recycling.

Sorry, I seem to have strayed a long way from a direct answer. But I am trying to stress that there is now a rich thermodynamical basis to life and mind science. And this base is naturally a hybrid discourse, as it can be talked about in both material and informational terms.

The complexity of life/mind is that it actually physically connects the material and the informational in a literal sense. It transcribes energy/matter into informational structure or memories - which are used in turn to control that realm of energy/matter (for the purposes of the second law of thermodynamics).

And then, all this fits into a systems perspective where you expect to find a causal organisation of exactly this type. You expect to find a fundamental division into complementary concepts, a differentiation that creates a spectrum of possibility, that then gets integrated to create a state of actuality.

So Friston has recognised that many of the most promising neuroscience models share something in common. Life and mind exist by playing the game of accelerating entropification. And they are by definition themselves a matching deceleration of that universal baseline entropification rate. But that can only happen by putting aside energy some place the rest of the universe cannot get at it - storing it as information, the configurational entropy of genes, neurons, cells, bodies, that can then act as material constraints on the world (via everything from the production of enzymes to the choice to reach out and stuff food in your mouth).

Now that in turn sets up the game of storing the optimal amount of information. Too much and senescence beckons. Too little, and immature behaviour results. There has to be a balance that is optimised continually, dynamically.

Friston boils this down to the dichotomy of expectations vs surprise. And over a modestly long run (which in fact is just dealing with the mature phase of an organism's life cycle) the goal is to do the most with the least. To reduce uncertainty as much as is useful (rather than as much as is possible) while storing the least amount of information (ie: creating the most generalised anticipations).
 
  • #81
OK, I'm fine with using "extremize". And yes, I agree within the theory the two poles are both fundamental.

When I was saying no theory is fundamental, I simply meant the simple common sense that we have no TOE. For example, Friston's theory uses probability densities, which means he either requires ensembles, in which case individual behaviour is not predicted, or, if it is a theory of individual behaviour, it requires ergodicity, which we know does not hold in general. Similarly, information theory requires ergodicity, which does not hold in general. Even our theory that has the most right to be called fundamental - the Einstein-Hilbert action and the standard model - is not fundamental, since it isn't a UV complete quantum theory.

So I can agree with two poles being fundamental, as well as not fundamental.
 
  • #82
atyy said:
When I was saying no theory is fundamental, I simply meant the simple common sense that we have no TOE.

OK, and I am saying if we did have a TOE, it would have two fundamental poles. Or at least be self-dual - internalising its essential dichotomy :smile:
 
  • #83
apeiron said:
OK, and I am saying if we did have a TOE, it would have two fundamental poles. Or at least be self-dual - internalising its essential dichotomy :smile:

:smile: You must be a big string theory fan! Or does that have too many dualities? :smile:
 
  • #84
apeiron, would you consider Friston's ideas to be related to variational methods in statistical models, eg. Yedidia et al's work linking Pearl's belief propagation to the Bethe approximation? Pearl's algorithm is like a dynamical or control system since it proceeds stepwise in time. The Bethe approximation is an explicit approximation to a thermodynamical free energy. There's earlier work too reviewed IIRC by Ghahramani and Jordan.
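To be concrete about what I mean by "proceeds stepwise in time", here is a stripped-down sum-product toy (three binary nodes on a chain, with made-up potentials, nothing taken from Yedidia et al): the message updates are iterated like a discrete-time system, and the beliefs are read off at the fixed point.

```python
import numpy as np

# Toy pairwise model: a 3-node binary chain x0 - x1 - x2.
# Potentials favour neighbouring agreement; all numbers are made up.
psi = np.array([[2.0, 1.0],
                [1.0, 2.0]])                 # psi[x_i, x_j] on both edges
phi = [np.array([0.7, 0.3]),                 # unary evidence on each node
       np.array([0.5, 0.5]),
       np.array([0.2, 0.8])]

def normalise(v):
    return v / v.sum()

# Messages on each directed edge, initialised uniformly.
msgs = {edge: np.ones(2) / 2 for edge in [(0, 1), (1, 0), (1, 2), (2, 1)]}

# Sum-product updates iterated as a discrete-time map; on a chain this
# converges after one sweep and the fixed point gives the exact marginals.
for _ in range(10):
    msgs[(0, 1)] = normalise(psi.T @ phi[0])
    msgs[(2, 1)] = normalise(psi.T @ phi[2])
    msgs[(1, 0)] = normalise(psi.T @ (phi[1] * msgs[(2, 1)]))
    msgs[(1, 2)] = normalise(psi.T @ (phi[1] * msgs[(0, 1)]))

belief_x1 = normalise(phi[1] * msgs[(0, 1)] * msgs[(2, 1)])
print(belief_x1)   # marginal of the middle node at the fixed point
```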
 
  • #85
atyy said:
apeiron, would you consider Friston's ideas to be related to variational methods in statistical models, eg. Yedidia et al's work linking Pearl's belief propagation to the Bethe approximation? Pearl's algorithm is like a dynamical or control system since it proceeds stepwise in time. The Bethe approximation is an explicit approximation to a thermodynamical free energy. There's earlier work too reviewed IIRC by Ghahramani and Jordan.

I don't see this as the same because Friston is talking about learning networks and these are just pattern matching ones I believe. One predicts its inputs and so is optimised for its forward modelling, the other finds optimal matches when driven by some set of inputs. The free energy principle in the first refers to the level of prediction error, in the second, it just relates to the efficient creation of the prediction.

So while free energy concepts could be invoked for both cases, only the former has the kind of biological realism that interests me. Although pattern matching networks could be considered to be embedded as part of the larger machinery of a generative network.

This paper might explain Friston's approach in better detail.

http://www.fil.ion.ucl.ac.uk/~karl/The%20free-energy%20principle%20a%20rough%20guide%20to%20the%20brain.pdf

For instance, note how there is a dichotomy of error units~state units built into the circuitry so as to have messages propagating both bottom-up and top-down.

Under hierarchical models, error-units receive messages from the states in the same level and the level above; whereas state-units are driven by error-units in the same level and the level below... This scheme suggests that connections between error and state-units are reciprocal; the only connections that link levels are forward connections conveying prediction error to state-units and reciprocal backward connections that mediate predictions.
 
  • #86
Yes, the Yedidia et al paper is only about formalism. Just like Newton's second law, which is not applicable to any real system until one specifies the form of F and the operational meaning of the variables. It's more related to my interest in the relationship between two formalisms: how far can dynamical or control systems be viewed in variational terms? The two textbook physics examples are the Lagrangian formulation of mechanics (which can be extended to some dissipative systems), and the relationship between kinetic theory and statistical mechanics.

Can I also get Pythagorean's view whether Yedidia et al's work counts as dynamical systems theory in the Poincare sense - ie. view Pearl's "Belief Propagation" algorithm as a dynamical system since it proceeds stepwise in time, and view the study of its "fixed points" (their terminology!) as being analogous to Poincare's concern for phase space topology?
 
  • #87
It doesn't appear to be dynamical systems theory in any straightforward way. It seems mostly probabilistic and not so much mechanistic. I see a lot of graph theory and statistics. I would say this is much more on the computational end of the spectrum.
 
  • #88
Pythagorean said:
It doesn't appear to be dynamical systems theory in any straightforward way. It seems mostly probabilistic and not so much mechanistic. I see a lot of graph theory and statistics. I would say this is much more on the computational end of the spectrum.

Hmm, but would you count an algorithm as a dynamical system since it proceeds stepwise in time with each step determined by the previous step? And consider questions about convergence as questions about fixed points? These seem closely related unless only continuous space and time are allowed in your view.
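The most trivial example of what I mean: Newton's iteration for sqrt(2), read as a discrete-time dynamical system whose convergence question is literally a fixed-point question.

```python
def step(x):
    """One step of the Babylonian / Newton iteration for sqrt(2), read as a
    discrete-time dynamical system x_{n+1} = f(x_n)."""
    return 0.5 * (x + 2.0 / x)

x = 1.0
for _ in range(6):
    x = step(x)

print(x)                 # the orbit has settled onto sqrt(2)
print(abs(step(x) - x))  # ~0: convergence means sitting at a fixed point f(x*) = x*
```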
 
  • #89
atyy said:
Hmm, but would you count an algorithm as a dynamical system since it proceeds stepwise in time with each step determined by the previous step?

I would argue that algorithms and equations are both examples of timeless modelling - in the formal computational description, rate no longer matters. And meanwhile back in reality, time in fact matters. It creates the critical dependencies.

So for example in generative neural networks like Friston's, there is one "algorithm" in the notion of the "optimisation of adaptedness", yet in a real system, this adaptation of state has to occur over multiple timescales. The short-term activity is nested within the long-term. That is why it is a learning system - it is a hierarchy of levels of memory, all doing the same thing, but across a spread of spatiotemporal scales.

Now dynamical approaches arose by allowing for feedback and iteration. So the algorithm - as an essentially timeless seed or kernel process - is then allowed to play out in time to generate some larger pattern. A fractal would be a good example. A timeless algorithm gets played out over all scales eventually (though it would take infinite time to fill in the full pattern).

However this is still a very partial inclusion of time into the modelling. It is not the kind of complex dynamics we might get from something like an ecosystem where organisms affect their environment, while the environment in turn constrains those organisms. Here we have multiple spatiotemporal scales of action in interaction, rather than merely a kernel process unfolding into an unchanging void. Kind of like the difference between Newtonian and relativistic mechanics when it comes to modelling dynamics.

So there is simple dynamics, where the parameters are fixed, and complex dynamics where the parameters are themselves dynamic - developing or evolving.
 
  • #90
The following is not a dynamical systems approach per se, but these are methods generally accepted to be necessary for confining the solution space of a dynamical system.

The following book explains metaheuristic approaches (in general, not just in biology). I find two approaches particularly interesting: exploration and exploitation. I think designing a good AI would require utilizing both, and additionally, the AI program "knowing" when to switch between exploration and exploitation.

Metaheuristics: From Design to Implementation
El-Ghazali Talbi
ISBN: 978-0-470-27858-1

Genetic/evolutionary algorithms are an example of a heuristic approach that steals ideas from nature, particularly the implementation of a stochastic optimization.
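A minimal genetic-algorithm sketch (toy objective and parameters of my own choosing, not from Talbi's book), with the exploitation and exploration steps marked explicitly:

```python
import numpy as np

rng = np.random.default_rng(2)

def fitness(x):
    """Toy objective to maximise; stands in for whatever region of the solution
    space of the dynamical system is being searched for."""
    return -(x - 3.0) ** 2

pop = rng.uniform(-10, 10, size=30)               # random initial population

for generation in range(100):
    survivors = pop[np.argsort(fitness(pop))][-15:]            # exploitation: keep the better half
    offspring = survivors + rng.normal(0.0, 0.5, size=15)      # exploration: mutated copies
    pop = np.concatenate([survivors, offspring])

print(pop[np.argmax(fitness(pop))])               # best individual, close to x = 3
```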

atyy said:
Hmm, but would you count an algorithm as a dynamical system since it proceeds stepwise in time with each step determined by the previous step? And consider questions about convergence as questions about fixed points? These seem closely related unless only continuous space and time are allowed in your view.

I agree that a mapping system is still a dynamical system; I guess I just don't see the mapping equation explicitly and I wouldn't know how to analyze this system, but this is probably due to my ignorance. Thinking about metaheuristics, though, I arrived at some kind of intuition about the mapping in a dynamical sense.
 
