Dynamical Neuroscience: Wiki Article Entry - Input Needed

  • Thread starter Pythagorean
  • Start date
  • Tags
    Neuroscience
In summary, the article seems to be oriented toward ANN-based AI, and does not have much relevance to dynamical neuroscience. I recommend that it be merged with another article on dynamical systems and completely rewritten.
  • #71
apeiron said:
But from dim memory - it's been 20 years - HMM approaches are pretty low-powered in practice. And they seemed crude rather than elegant in principle.

If you have references to where they are proving to be now important in theoretical neuroscience, that would be interesting.

Most successful HMM models are reductionist (receptor-ligand kinetics and protein-kinase interactions). They are quite standard in biophysics.

On the larger scale, I did not read this in a paper or anything; it is more that my imagination recognizes the room there for hybrid modeling. The big problem is how vast the phase space of high-dimensional systems is: finding a regime in your model that fits a particular disease is a lot like finding a survivor in a forest. There are many more places to look than a single person might have time for in his lifetime.

It is also a fair point that in any domain of modelling, you need to trade-off generality and specificity. But the question was, what does that look like in neuroscience as a whole, or science as a whole?

And in any case, you are still arguing for a dichotomy in your co-ordinate basis. You are re-stating the fact that there needs to be a compass bearing that points north to generality (global form) and south to specificity (local substance).

Unless you can point out the two complementary directions for your domain of modelling, how do you make any definite specificity~generality trade-off?

Yes, my response was not meant to be in conflict with dichotomization. I was trying to show the common ground.

The specificity~generality trade-off comes down to scale. Long-term processes vs. short-term processes, or global (long-distance) processes vs. local (short-distance) processes.

If your scale is your bifurcation parameter, then it seems quite natural (to me) to partition your system by its bifurcations: the qualitative branching of emergent states, the transitions where your system flips from one qualitative state to another even though the individual particles are all following the same fundamental laws and may look like nothing but random noise in a limited-dimension slice of the system.
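A minimal sketch of what partitioning by bifurcations can look like in practice (my own illustration, assuming numpy; the quadratic integrate-and-fire normal form dV/dt = I + V^2 is a standard toy model, not something taken from this post). Sweeping the parameter and recording where the number of equilibria changes locates the saddle-node bifurcation separating the resting regime from the spiking regime:

```python
# A minimal sketch (my own illustration, not from this post) of partitioning a
# model by its bifurcations: the quadratic integrate-and-fire normal form
# dV/dt = I + V^2 loses its equilibria in a saddle-node bifurcation at I = 0,
# which is where the qualitative state flips from resting to repetitive spiking.
import numpy as np

def equilibria(I):
    """Real roots of I + V^2 = 0, i.e. the fixed points of the model."""
    if I > 0:
        return []                          # no equilibria: the model spikes repeatedly
    v = np.sqrt(-I)
    return [-v, v]                         # stable rest state at -v, unstable threshold at +v

currents = np.linspace(-1.0, 1.0, 2001)    # the bifurcation parameter being swept
counts = np.array([len(equilibria(I)) for I in currents])

# Partition parameter space by where the count of equilibria changes:
for idx in np.where(np.diff(counts) != 0)[0]:
    print(f"qualitative change (bifurcation) near I = {currents[idx + 1]:.3f}")
```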
 
  • #72
Pythagorean said:
If your scale is your bifurcation parameter, then it seems quite natural (to me) to partition your system by its bifurcations: the qualitative branching of emergent states, the transitions where your system flips from one qualitative state to another even though the individual particles are all following the same fundamental laws and may look like nothing but random noise in a limited-dimension slice of the system.

OK, I agree that both our models and even nature make these trade-offs. So a prominent example would be the neural code issue.

The underpinning of what happens at synapses, or at axon hillocks, is dynamical/material. No question. But at what point does this specificity of material detail get ignored/filtered away by the generality of informational processes in the brain? Or does it in fact get filtered away at all?

These are the kinds of foundational issues that plague theory. And so we have to confront them head-on.

Seeking out bifurcations and similar sharp transitions in dynamics sounds like the right thing to do. But in the end - when it comes to the kind of systems-level, whole brain, neuroscience I am interested in - what does it buy you?

Yes, it may be most or all of what you need to do biophysical modelling of neurons. But is it then any use for modelling functional networks of neurons that are now modelling the world?

The people I knew who were trying a dynamicist approach to functional modelling have ended up doing something else - hybrid approaches like the Bayesian brain. Or else they are fading into forgotten history.
 
  • #73
apeiron said:
Seeking out bifurcations and similar sharp transitions in dynamics sounds like the right thing to do. But in the end - when it comes to the kind of systems-level, whole brain, neuroscience I am interested in - what does it buy you?

Yes, it may be most or all of what you need to do biophysical modelling of neurons. But is it then any use for modelling functional networks of neurons that are now modelling the world?

The people I knew who were trying a dynamicist approach to functional modelling have ended up doing something else - hybrid approaches like the Bayesian brain. Or else they are fading into forgotten history.

What it buys you is the linking of emergent (global) properties to local events. Yes, it can be used for meta-modeling, but it is extremely demanding on resources (computational power, energy, time) to do it at the level the brain actually does.

A 'basin of attraction' study will give you an idea of where to go in control theory for the system, and then you can start representing and modeling... but you have to recognize that there is degeneracy in the system. Two completely different reductionist regimes can lead to the same qualitative, emergent outcome. But they will not necessarily transition dynamically between such states in the same way (i.e. they would have different exits in the Markov model), so they are not actually the same.
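A minimal sketch of the kind of 'basin of attraction' study being described (my own illustration, assuming numpy; the bistable system dx/dt = x - x^3 is a generic toy, not a neural model). Each initial condition is integrated forward and labelled by the attractor it settles into:

```python
# A toy 'basin of attraction' study (my own illustration, not from the thread):
# the bistable system dx/dt = x - x^3 has attractors at x = -1 and x = +1,
# separated by an unstable equilibrium at x = 0.  Each initial condition is
# integrated forward and labelled by where it ends up.
import numpy as np

def attractor_label(x0, dt=0.01, steps=4000):
    x = x0
    for _ in range(steps):
        x += dt * (x - x**3)                     # forward-Euler step of the vector field
    if abs(x) < 0.5:                             # never left the neighbourhood of the separatrix
        return "separatrix"
    return "left attractor (x = -1)" if x < 0 else "right attractor (x = +1)"

for x0 in np.linspace(-2.0, 2.0, 9):
    print(f"x0 = {x0:+.2f}  ->  {attractor_label(x0)}")
```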

This is why there are reduced ANNs that represent neurons in only two states (1 or 0): because they are cheap on resources. But this is too extreme in the other direction, and now you are missing out on the interference and integrative effects of the passive currents that become important to diversity (i.e. optimization, in the computational view).
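A minimal sketch of such a reduced two-state network (my own illustration, assuming numpy; it uses the +/-1 convention rather than 1/0). One pattern is stored with Hebbian weights and recalled from a corrupted cue - cheap on resources, but with no membrane dynamics or passive currents at all:

```python
# A toy Hopfield-style binary network (my own illustration): 'neurons' are just
# +/-1, Hebbian weights store one pattern, and asynchronous threshold updates
# recall it from a corrupted cue.  No passive currents, no integration - that is
# exactly the trade-off being described above.
import numpy as np

rng = np.random.default_rng(0)
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])        # the stored memory (+/-1 instead of 1/0)
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)                                 # Hebbian weights, no self-connections

state = pattern.copy()
flipped = rng.choice(len(state), size=2, replace=False)  # corrupt two bits of the cue
state[flipped] *= -1

for _ in range(5):                                       # a few sweeps of asynchronous threshold updates
    for i in rng.permutation(len(state)):
        state[i] = 1 if W[i] @ state >= 0 else -1

print("recalled the stored pattern:", np.array_equal(state, pattern))
```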

The Bayesian brain is, of course, another empirically sound idea. If you are studying long-term associative learning especially, it's the most obvious choice. I do not know exactly how it is implemented, but my imagination runs with ideas of how I'd do it (maybe throw a squall of randomized initial conditions at my system and find the Bayesian relationships between initial conditions and the emergent properties, i.e. stimulus and representation).
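A minimal sketch of that imagined procedure (my own illustration, assuming numpy; the bistable toy system and the 'stimulus' variable are mine, not anything from the thread). Randomized initial conditions are thrown at the system and the conditional probability linking stimulus class to emergent outcome is read off empirically:

```python
# A toy version of the idea floated above (my own illustration): scatter
# randomized initial conditions over a bistable system, record the emergent
# outcome each one falls into, and read off the Bayesian relationship between
# 'stimulus' and 'representation' empirically.  The stimulus here is just a
# +/-1 bias on a noisy initial condition.
import numpy as np

rng = np.random.default_rng(1)

def outcome(x0, dt=0.01, steps=4000):
    x = x0
    for _ in range(steps):
        x += dt * (x - x**3)                     # same bistable toy system as before
    return 1 if x > 0 else -1                    # which attractor (emergent state) we end in

stimuli = rng.choice([-1, 1], size=500)                   # two stimulus classes
x0s = stimuli * 0.3 + rng.normal(0.0, 0.5, size=500)      # noisy initial conditions
outcomes = np.array([outcome(x) for x in x0s])

for s in (-1, 1):
    p = np.mean(outcomes[stimuli == s] == s)
    print(f"P(outcome matches stimulus | stimulus = {s:+d}) ~ {p:.2f}")
```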

Sadly, this is outside the scope of my master's thesis, but I am actually quite interested in taking my models to the next level for my PhD thesis.
 
  • #74
Pythagorean said:
This is why there are reduced ANNs that represent neurons in only two states (1 or 0): because they are cheap on resources. But this is too extreme in the other direction, and now you are missing out on the interference and integrative effects of the passive currents that become important to diversity (i.e. optimization, in the computational view).

Yep, again this is what we find it boils down to. Should our ontological basis be digital or analog? Or in fact, is it really all about the productive interaction of these "two worlds"?

Now we can attempt to solve that issue for every individual domain of inquiry. And that makes it largely a pragmatic question. Or we can attempt to solve it for the entire domain of science - discover a general law of nature that constrains all models in a "first principles" fashion.

That is the prize on offer. And we can at least assume that any functional brain system does manage to optimise these complementary imperatives. So nature could give us the big clue.

Grossberg is an example of this kind of big picture thinking. He looked for generalities (like the plasticity~stability dilemma) and then tried to cash in with a whole bundle of specific applications. But he may ultimately have been too computational.

Scott Kelso is another interesting case of a dynamicist seeking the general story to frame all the particular stories. But I think he erred too much on the dynamicist side.

Karl Friston is someone who from the start (being a younger generation) understood systems are driven by their twin imperatives, and the secret is to model the optimisation principle that lurks in their interaction.
 
  • #75
apeiron said:
And your comment about there being no unit of exchange apart from the human mind is baffling. Units of exchange are what a modelling mind would create, not what they would "be".

Yes, I was being sloppy. The above rephrasing is an acceptable interpretation of my short hand.

I think I'm understanding you very poorly. I'm basically expressing what seems to me plain common sense. I think you are trying to formalize something grander.

Let's take Friston's work as a concrete example, since you posted a link some time ago which was free and which I gave a quick read. If I remember correctly, his basic point was that the input-output relationship of the brain can be described as extremizing some functional. Why is that dichotomous, and what is it dichotomous to?
 
  • #76
atyy said:
Let's take Friston's work as a concrete example, since you posted a link some time ago which was free and which I gave a quick read. If I remember correctly, his basic point was that the input-output relationship of the brain can be described as extremizing some functional. Why is that dichotomous, and what is it dichotomous to?

The general dichotomy employed is the entropy~negentropy one of thermodynamics. So sensory surprise is treated as disorder, and sensory anticipation as order. What is optimised over the long-term is the minimisation of surprise, the maximisation of anticipation.

This is achieved in practice by the interaction between two activities...

Agents can suppress free energy by changing the two things it depends on: they can change sensory input by acting on the world or they can change their recognition density by changing their internal states. This distinction maps nicely onto action and perception.

Grossberg's ART put it rather more simply as the interaction between two levels of memory - short term and long term.

Friston in similar fashion maps the essential hierarchical interaction to actual brain architecture...

It shows the putative cells of origin of forward driving connections that convey prediction error (grey arrows) from a lower area (for example, the lateral geniculate nucleus) to a higher area (for example, V1), and nonlinear backward connections (black arrows) that construct predictions. These predictions try to explain away prediction error in lower levels. In this scheme, the sources of forward and backward connections are superficial and deep pyramidal cells (upper and lower triangles), respectively, where state units are black and error units are grey. The equations represent a gradient descent on free energy using the generative model below. The two upper equations describe the formation of prediction error encoded by error units, and the two lower equations represent recognition dynamics, using a gradient descent on free energy.

The paper - http://www.fil.ion.ucl.ac.uk/~karl/The free-energy principle A unified brain theory.pdf - is a good example here because it tries to unite many models under the one generalised approach. So it weaves in optimal control theory, DST, and other stuff.

There are explicit appeals to dynamical concepts, like...

surprise here relates not just to the current state, which cannot be changed, but also to movement from one state to another, which can change. This motion can be complicated and itinerant (wandering) provided that it revisits a small set of states, called a global random attractor, that are compatible with survival (for example, driving a car within a small margin of error). It is this motion that the free-energy principle optimizes.

And many sub-dichotomies are identified - such as the complementary nature of reward and error, or exploitation and exploration.

So generally, Friston is seeking two poles in interaction. And then the equilibrium balance point that optimises that interaction. Moreover, the interaction is hierarchical, with bottom-up degrees of freedom meeting top-down constraints.

On the question of whether the brain is really dynamical, or really computational, Friston's answer is clearly that it is a mix. And he tries to tie together the many earlier attempts at mixed models - like synergetics, autopoiesis, adaptive resonance, neural darwinism, optimal control theory, Hebbian cell assemblies, infomax, predictive coding, etc, etc - into one thermodynamics-based paradigm.

So Friston seeks to place a 50-plus year history of neuroscience models, which are all broadly dancing around the same anticipatory and hierarchical processing approach, on a shared footing, the free-energy principle, which in turn places neuroscience on the firm foundation of a branch of physics.

The free-energy principle is "the minimization of the free energy of sensations and the representation of their causes". So the dichotomy is the division into sensations and their causes - our impressions and our ideas. And the optimisation is about striking the balance between these two kinds of effort, so that we are expending the least possible effort in mentally modelling the world.
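A minimal sketch of the gradient-descent-on-prediction-error idea described in the quoted passage (my own toy reduction in Python, not Friston's hierarchical scheme; it is a single level with the precisions fixed at one, so the "free energy" is essentially just squared prediction error). An internal state generates a prediction of the input, and the recognition dynamics descend the error:

```python
# My own toy reduction of the scheme quoted above, not Friston's actual model:
# one level of predictive coding with precisions fixed at one, so 'free energy'
# reduces essentially to squared prediction error.

def g(mu):                  # toy generative model: hidden cause -> predicted sensation
    return 2.0 * mu

def dg(mu):                 # its derivative, used by the gradient
    return 2.0

u = 3.0                     # the actual sensory input
mu = 0.0                    # initial guess about the hidden cause
lr = 0.05                   # step size of the recognition dynamics

for step in range(200):
    error = u - g(mu)               # prediction error carried by the 'error units'
    mu += lr * dg(mu) * error       # the 'state units' descend the free-energy gradient

print(f"inferred cause mu = {mu:.3f}, prediction g(mu) = {g(mu):.3f}, input u = {u}")
```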
 
  • #77
apeiron, thanks for the long write-up. Let me ask questions in little bits to see if I understand you correctly.

Is the main point that free energy minimization is essentially maximization of entropy subject to the constraint of constant energy, so the two poles are entropy and energy, with both poles equally fundamental?
 
  • #78
apeiron said:
The general dichotomy employed is the entropy~negentropy one of thermodynamics. So sensory surprise is treated as disorder, and sensory anticipation as order. What is optimised over the long-term is the minimisation of surprise, the maximisation of anticipation.

This is achieved in practice by the interaction between two activities...
Grossberg's ART put it rather more simply as the interaction between two levels of memory - short term and long term.

Friston in similar fashion maps the essential hierarchical interaction to actual brain architecture...
The paper - http://www.fil.ion.ucl.ac.uk/~karl/The free-energy principle A unified brain theory.pdf - is a good example here because it tries to unite many models under the one generalised approach. So it weaves in optimal control theory, DST, and other stuff.

I don't understand any of the paper, and my following input is likely misguided, but it looks very interesting. One thing that strikes me is that it looks like neuroscience is following a path toward a "TOE" equivalent; by that I mean organizing everything under a more general pattern of processes.

Thoughts?
 
  • #79
Nano-Passion said:
I don't understand any of the paper, and my following input is likely misguided, but it looks very interesting. One thing that strikes me is that it looks like neuroscience is following a path toward a "TOE" equivalent; by that I mean organizing everything under a more general pattern of processes.

Thoughts?

To a purist, a TOE might mean that everything in the universe can be described by one equation (even emotions and consciousness). To others, it simply means the specific case of unifying gravity and QM.

TOE can mean different things, but yeah, the idea is toward generalization. I think that's the general direction of any theoretical approach.
 
  • #80
atyy said:
Is the main point that free energy minimization is essentially maximization of entropy subject to the constraint of constant energy, so the two poles are entropy and energy, with both poles equally fundamental?

Sort of. A first clarification may be to swap the dichotomy of maximal~minimal for something more appropriate, like extremal~optimised. Otherwise the language tends to get in the way - as in dissipative structure theory where people can't decide whether they are talking about a maximum or minimum entropy production principle.

So the underlying theory, from a systems perspective is that in any situation you have the two extremal poles that separate the subject under discussion. The differentiation step. Then you have the complementary process of the synergistic mixing or integration, which is the optimisation action.

In terms of the thermodynamics of living/mindful structures - which is what we are talking about here with Friston - the opposing extremes would be complete certainty and complete uncertainty. Then the optimisation is the search for a productive balance of the two, over the spatiotemporal scales relevant to an organism. So for instance, we both want to know things for sure in a "right now" way and a "long term" way. Reducing uncertainty for one scale could increase it for the other. Therefore some kind of balance needs to be struck.

Also, uncertainty is about degrees of freedom yet to be disposed of. You can't teach an old dog new tricks, as they say. So that is another reason why a balance needs to be struck between a capacity to learn, to be creative due to uncertainty, and a capacity to impose a certainty on thought and perception.

You can see I'm talking about all this in information theoretic terms. And that is the advantage of thermodynamics - it straddles the divide pretty well. So the usable energy~dissipated energy distinction in material reality can be formally equated to a certainty~uncertainty distinction in our subjective view. The maths of one can be used to describe the maths of the other.

And the relationship goes even deeper if you follow the infodynamics approach to dissipative structure because information is being used to regulate dynamics. Minds have a reason to exist - it is to control their worlds.

Anyway, the thing to get perhaps is that standard thermodynamics seems to say that the goal of reality is to entropify gradients. If a source of energy/order can be dissipated to waste heat/disorder, then it must be.

This does seem like a simple extremum principle - thou shalt maximise disorder! But it also hides the larger systems story. There have to be the two extremes to have a gradient (an initial state of order, a final state of disorder). And then the disordering has to actually happen in time. So there is an optimal rate for the process - which is the fastest possible perhaps, but as we can tell from the long history of our universe, not actually instantaneous.

Then from this baseline simplicity, complexity can arise. Any region of the universe that can accelerate the entropification rate can also afford a complementary measure of deceleration. Or in other words, life can arise as order (negentropy) because it is increasing the local disorder (entropy). And the way it does this is by capturing energy and turning it into stored information - the physical structure that is a body, its genes, its neural circuits.

So dissipative structure dips into the flow of entropification to build an informational self that can exist because it raises the general rate of entropification.

Now, there is a lifecycle balance to be optimised here, as said. A living system is a mix of its genes and its metabolism, its information and its dynamics. It needs to be able to regulate its world, but there is a danger in trying to over-regulate.

In theoretical biology, Stan Salthe models this dilemma as the canonical lifecycle of immaturity, maturity, senescence. A gastrula is immature - lightly constrained, fast growing/entropifying, still many degrees of freedom open to it. A mature organism has a more structured balance - it no longer grows, but still repairs, still learns. Then a senescent individual is overloaded by informational constraints - it is well-adapted to its world, but in a now rigid and brittle fashion. It can no longer respond to sharp environmental perturbation and is subject to catastrophic collapse and material recycling.

Sorry, I seem to have strayed a long way from a direct answer. But I am trying to stress that there is now a rich thermodynamical basis to life and mind science. And this base is naturally a hybrid discourse, as it can be talked about in both material and informational terms.

The complexity of life/mind is that it actually physically connects the material and the informational in a literal sense. It transcribes energy/matter into informational structure or memories - which are used in turn to control that realm of energy/matter (for the purposes of the second law of thermodynamics).

And then, all this fits into a systems perspective where you expect to find a causal organisation of exactly this type. You expect to find a fundamental division into complementary concepts, a differentiation that creates a spectrum of possibility, that then gets integrated to create a state of actuality.

So Friston has recognised that many of the most promising neuroscience models share something in common. Life and mind exist by playing the game of accelerating entropification. And they are by definition themselves a matching deceleration of that universal baseline entropification rate. But that can only happen by putting aside energy some place the rest of the universe cannot get at it - storing it as information, the configurational entropy of genes, neurons, cells, bodies, that can then act as material constraints on the world (via everything from the production of enzymes to the choice to reach out and stuff food in your mouth).

Now that in turn sets up the game of storing the optimal amount of information. Too much and senescence beckons. Too little, and immature behaviour results. There has to be a balance that is optimised continually, dynamically.

Friston boils this down to the dichotomy of expectations vs surprise. And over a modestly long run (which in fact is just dealing with the mature phase of an organism's life cycle) the goal is to do the most with the least. To reduce uncertainty as much as is useful (rather than as much as is possible) while storing the least amount of information (ie: creating the most generalised anticipations).
 
  • #81
OK, I'm fine with using "extremize". And yes, I agree within the theory the two poles are both fundamental.

When I was saying no theory is fundamental, I simply meant the simple common sense that we have no TOE. For example, Friston's theory uses probability densities, which means he either requires ensembles, in which case individual behaviour is not predicted, or, if it is a theory of individual behaviour, it requires ergodicity, which we know does not hold in general. Similarly, information theory requires ergodicity, which does not hold in general. Even the theory with the most right to be called fundamental - the Einstein-Hilbert action and the standard model - is not fundamental, since it isn't a UV-complete quantum theory.

So I can agree with two poles being fundamental, as well as not fundamental.
 
  • #82
atyy said:
When I was saying no theory is fundamental, I simply meant the simple common sense that we have no TOE.

OK, and I am saying if we did have a TOE, it would have two fundamental poles. Or at least be self-dual - internalising its essential dichotomy :smile:
 
  • #83
apeiron said:
OK, and I am saying if we did have a TOE, it would have two fundamental poles. Or at least be self-dual - internalising its essential dichotomy :smile:

:smile: You must be a big string theory fan! Or does that have too many dualities? :smile:
 
  • #84
apeiron, would you consider Friston's ideas to be related to variational methods in statistical models, eg. Yedidia et al's work linking Pearl's belief propagation to the Bethe approximation? Pearl's algorithm is like a dynamical or control system since it proceeds stepwise in time. The Bethe approximation is an explicit approximation to a thermodynamical free energy. There's earlier work too reviewed IIRC by Ghahramani and Jordan.
 
  • #85
atyy said:
apeiron, would you consider Friston's ideas to be related to variational methods in statistical models, eg. Yedidia et al's work linking Pearl's belief propagation to the Bethe approximation? Pearl's algorithm is like a dynamical or control system since it proceeds stepwise in time. The Bethe approximation is an explicit approximation to a thermodynamical free energy. There's earlier work too reviewed IIRC by Ghahramani and Jordan.

I don't see this as the same because Friston is talking about learning networks and these are just pattern matching ones I believe. One predicts its inputs and so is optimised for its forward modelling, the other finds optimal matches when driven by some set of inputs. The free energy principle in the first refers to the level of prediction error, in the second, it just relates to the efficient creation of the prediction.

So while free energy concepts could be invoked for both cases, only the former has the kind of biological realism that interests me. Although pattern matching networks could be considered to be embedded as part of the larger machinery of a generative network.

This paper might explain Friston's approach in better detail.

http://www.fil.ion.ucl.ac.uk/~karl/The%20free-energy%20principle%20a%20rough%20guide%20to%20the%20brain.pdf

For instance, note how there is a dichotomy of error units~state units built into the circuitry so as to have messages propagating both bottom-up and top-down.

Under hierarchical models, error-units receive messages from the states in the same level and the level above; whereas state-units are driven by error-units in the same level and the level below... This scheme suggests that connections between error and state-units are reciprocal; the only connections that link levels are forward connections conveying prediction error to state-units and reciprocal backward connections that mediate predictions.
 
  • #86
Yes, the Yedidia et al paper is only about formalism. Just like Newton's second law, which is not applicable to any real system until one specifies the form of F and the operational meaning of the variables. It's more related to my interest in the relationship between two formalisms: how far can dynamical or control systems be viewed in variational terms? The two textbook physics examples are the Lagrangian formulation of mechanics (which can be extended to some dissipative systems), and the relationship between kinetic theory and statistical mechanics.

Can I also get Pythagorean's view whether Yedidia et al's work counts as dynamical systems theory in the Poincare sense - ie. view Pearl's "Belief Propagation" algorithm as a dynamical system since it proceeds stepwise in time, and view the study of its "fixed points" (their terminology!) as being analogous to Poincare's concern for phase space topology?
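A minimal sketch of belief propagation viewed exactly this way (my own illustration, assuming numpy; the three-node chain and its potentials are made up, not taken from Yedidia et al). The message updates are iterated as a discrete-time dynamical system until they stop changing, i.e. until they reach a fixed point:

```python
# Pearl's sum-product message updates on a toy chain x1 - x2 - x3 of binary
# variables, iterated as a discrete-time dynamical system on the space of
# messages and run until it reaches a fixed point (my own illustration).
import numpy as np

phi = [np.array([0.7, 0.3]),            # unary potentials (evidence) on x1, x2, x3
       np.array([0.5, 0.5]),
       np.array([0.2, 0.8])]
psi = np.array([[0.9, 0.1],             # pairwise potential favouring agreement
                [0.1, 0.9]])

edges = [(0, 1), (1, 0), (1, 2), (2, 1)]                 # directed messages on the chain
msgs = {e: np.ones(2) / 2 for e in edges}                # the system's initial state

for step in range(50):
    new = {}
    for (i, j) in edges:
        incoming = phi[i].copy()
        for (k, l) in edges:                              # product of messages into i, except from j
            if l == i and k != j:
                incoming *= msgs[(k, l)]
        m = psi.T @ incoming                              # sum-product update
        new[(i, j)] = m / m.sum()
    delta = max(np.abs(new[e] - msgs[e]).max() for e in edges)
    msgs = new
    if delta < 1e-10:                                     # the map has reached its fixed point
        print(f"messages converged after {step + 1} iterations")
        break

belief_mid = phi[1] * msgs[(0, 1)] * msgs[(2, 1)]         # marginal belief at the middle node
print("belief at x2:", belief_mid / belief_mid.sum())
```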
 
  • #87
It doesn't appear to be dynamical systems theory in any straightforward way. It seems mostly probabilistic and not so much mechanistic. I see a lot of graph theory and statistics. I would say this is much more on the computational end of the spectrum.
 
  • #88
Pythagorean said:
It doesn't appear to be dynamical systems theory in any straightforward way. It seems mostly probabilistic and not so much mechanistic. I see a lot of graph theory and statistics. I would say this is much more on the computational end of the spectrum.

Hmm, but would you count an algorithm as a dynamical system since it proceeds stepwise in time with each step determined by the previous step? And consider questions about convergence as questions about fixed points? These seem closely related unless only continuous space and time are allowed in your view.
 
  • #89
atyy said:
Hmm, but would you count an algorithm as a dynamical system since it proceeds stepwise in time with each step determined by the previous step?

I would argue that algorithms and equations are both examples of timeless modelling - in the formal computational description, rate no longer matters. And meanwhile back in reality, time in fact matters. It creates the critical dependencies.

So for example in generative neural networks like Friston's, there is one "algorithm" in the notion of the "optimisation of adaptedness", yet in a real system, this adaptation of state has to occur over multiple timescales. The short-term activity is nested within the long-term. That is why it is a learning system - it is a hierarchy of levels of memory, all doing the same thing, but across a spread of spatiotemporal scales.

Now dynamical approaches arose by allowing for feedback and iteration. So the algorithm - as an essentially timeless seed or kernel process - is then allowed to play out in time to generate some larger pattern. A fractal would be a good example. A timeless algorithm gets played out over all scales eventually (though it would take infinite time to fill in the full pattern).

However this is still a very partial inclusion of time into the modelling. It is not the kind of complex dynamics we might get from something like an ecosystem where organisms affect their environment, while the environment in turn constrains those organisms. Here we have multiple spatiotemporal scales of action in interaction, rather than merely a kernel process unfolding into an unchanging void. Kind of like the difference between Newtonian and relativistic mechanics when it comes to modelling dynamics.

So there is simple dynamics, where the parameters are fixed, and complex dynamics where the parameters are themselves dynamic - developing or evolving.
 
  • #90
The following is not a dynamical systems approach per se, but these are methods generally accepted to be necessary for confining the solution space of a dynamical system.

The following book explains metaheuristic approaches (in general, not just biology). I find two approaches particularly interesting: exploration and exploitation. I think designing a good AI would require utilizing both, and additionally, the AI program "knowing" when to switch between exploration and exploitation.

Metaheuristics: From Design to Implementation
El-Ghazali Talbi
ISBN: 978-0-470-27858-1

Genetic/evolutionary algorithms are an example of a heuristic approach that steals ideas from nature, particularly the implementation of a stochastic optimization.
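A minimal sketch of the exploration~exploitation trade-off (my own illustration, assuming numpy; the toy bandit and the decaying-epsilon schedule are mine, not from Talbi's book). Early on the program explores at random; as its estimates firm up, it exploits the best option it has found:

```python
# A toy multi-armed bandit with a decaying epsilon-greedy rule, standing in for
# the program 'knowing' when to switch from exploration to exploitation
# (my own illustration).
import numpy as np

rng = np.random.default_rng(2)
true_payoffs = np.array([0.2, 0.5, 0.8])        # hidden quality of each option
estimates = np.zeros(3)
counts = np.zeros(3)

for t in range(1, 1001):
    epsilon = 1.0 / np.sqrt(t)                  # explore early, exploit later
    if rng.random() < epsilon:
        arm = rng.integers(3)                   # exploration: try something at random
    else:
        arm = int(np.argmax(estimates))         # exploitation: use the best current estimate
    reward = float(rng.random() < true_payoffs[arm])
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]   # running mean of rewards

print("estimated payoffs:", np.round(estimates, 2))
print("pulls per option: ", counts.astype(int))
```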

atyy said:
Hmm, but would you count an algorithm as a dynamical system since it proceeds stepwise in time with each step determined by the previous step? And consider questions about convergence as questions about fixed points? These seem closely related unless only continuous space and time are allowed in your view.

I agree that a mapping system is still a dynamic system, I guess I just don't see the mapping equation explicitly and I wouldn't know how to analyze this system, but this is probably due to my ignorance. Thinking about metaheuristics though, I kind of arrived at some kind of intuition about the mapping in a dynamical sense.
 
  • #91
Interesting discussion Pythagorean, atyy, and apeiron. I have a question regarding the paper, and an observation that leads to a second question at the end. I took the time to read and redline the paper apeiron posted in post #76. I would like clarification on page 6, right side, middle of the "Biased competition and attention" paragraph:
The most obvious candidates for controlling gain (and implicitly encoding precision) are classical neuromodulators like dopamine and acetylcholine, which provides a nice link to theories of attention and uncertainty

I always thought dopamine and acetylcholine were neurotransmitters rather than neuromodulators?
The paper - http://www.fil.ion.ucl.ac.uk/~karl/T...n%20theory.pdf - is a good example here because it tries to unite many models under the one generalised approach. So it weaves in optimal control theory, DST, and other stuff.

I think that whatever theories and models describe how the brain learns, adapts and responds to injury should consider results from experiments done in the past. Specifically, in my posts https://www.physicsforums.com/showpost.php?p=2925375&postcount=25 and https://www.physicsforums.com/showpost.php?p=2971857&postcount=30 from my plasticity thread. Excerpts below, regarding brain maps rearranging themselves in topographical order in response to severing nerves, with the results then observed experimentally using micro probes after surgery. My point is that there is a physical limit to the area of adaptation (thought to be 1 to 2 millimeters, but through experiment observed to be almost half an inch!)
Post #25
To make a long story short, a colleague of Merzenich's at Vanderbilt, Jon Kaas, worked with a student, Tim Pons, who wondered: was one to two millimeters the limit for plastic change? I bet some of you can guess where this idea is going - an experiment, right? But how? The answer lay in the Silver Spring monkeys, because they alone had spent twelve years without sensory input to their brain maps. Ironically, PETA's interference for all those years had made them increasingly valuable to the scientific community. If any creature had massive cortical reorganization that could be mapped, it would be one of them.

All of the monkeys were aging, but two in particular were in very bad health and close to death. PETA lobbied the NIH to have one, Paul, euthanized. Mortimer Mishkin - head of Neuroscience and chief of the Laboratory of Neuropsychology at NIH, who many years before had inspected Taub's first deafferentation experiment, the one that overturned Nobel Prize winner Charles Sherrington's reflexological theory - met with Tim Pons, agreeing that when the monkeys were to be euthanized, a final experiment could be done, one that would hopefully answer Pons's question. This was a brave decision, since Congress was still on record as favoring PETA. For this reason, they left the government out of it and performed the experiment entirely with private funds. The pressure and fear of repercussion were immense. They performed the procedure in four hours, when it normally took a whole day to complete. They removed part of the monkey's skull and inserted 124 electrodes in different spots of the sensory cortex map for the arm, then stroked the deafferented arm. As expected, the arm sent no impulses to the electrodes. Then Pons stroked the monkey's face, knowing that the brain map for the face is right next to the one for the arm. The neurons in the monkey's deafferented arm map began to fire, confirming that the facial map had taken over the arm map. As Merzenich had seen in his experiments, when a brain map is unused, the brain can organize itself so another mental function can take over the processing space. Most surprising was the scope of the reorganization: over half an inch! Holy crap... that to this humble observer is freaking amazing. The monkey was then euthanized. Over the next six months, this experiment was repeated with three more monkeys, with the same results. Taub had proved that reorganization in damaged brains could occur over very large sectors, giving hope to those suffering from severe brain injury.

and post #30

Merzenich, Paul, and Goodman wanted to find out what happens when a peripheral nerve is cut and, in the process of regeneration, the axons reattach to the wrong nerve. When this happens a person experiences a "false localization", so that a touch that should be felt on an index finger is instead felt on the thumb. Up to this time, scientists believed that the signal from the nerve passed to a specific point on a brain map. Merzenich and his team accepted this "point to point" model. They set out to document what happened in the brain during the shuffling of nerves. Instead, as they laboriously recorded the neuronal brain maps, they discovered that the signals were "topographically arranged" - the brain had unshuffled the signals from the crossed nerves. This insight forever changed Merzenich's life. Second, the topographically arranged maps were forming in slightly different brain areas than had been observed before the nerves were cut.

Fast forward: as time passed, more and more of Merzenich's experiments convinced him beyond a shadow of a doubt - and convinced the close associates who conducted brain mapping experiments with him - that the brains of his test subjects changed every few weeks, even in cases where no major injury disturbed the brain maps. Merzenich's rejection of localization in the adult brain ran into predictably stiff opposition. In Merzenich's words, "Let me tell you what happened when I began to declare that the brain was plastic. I received hostile treatment. I don't know how else to put it. I got people saying things in reviews such as, 'This would be really interesting if it could possibly be true, but it could not be.' It was as if I just made it up." His critics believed his experiments were sloppy and that the effects described in the results were uncertain. (Recall for a moment how precise the micro-mapping location and sensitivity signals were earlier in this post. Obviously, many of Merzenich's critics did not do an unbiased assessment of his research.) Torsten Wiesel, the Nobel prize winner, now admits that localization in adulthood is wrong and has gracefully acknowledged in print that he was mistaken, and that Merzenich's and his team's work ultimately led him and his colleagues to change their minds. The remaining hardcore localization people took notice when someone like Torsten Wiesel admitted localization was wrong. His admission led to brain plasticity being accepted in mainstream neurological circles. To summarize, localization existed as a tenet of mainstream belief for almost 70 years until proven wrong by Merzenich and his remarkable experiments.

This brings me to tantalizing and as yet unsolved questions. We know that brain maps arrange themselves in topographical order, meaning that the map is ordered as the body itself is ordered. We now know that topographic order appears because many of our everyday activities involve repeating sequences of movement in a fixed order. Second, brain maps work by grouping together events that happen together. The auditory cortex is arranged like the keys of a piano, with low notes on one end and high ones on the other. Form follows function, in that sounds in nature come together with each other in rising sequence. But what causes the auditory cortex to arrange itself this way? Obviously we can see this from testing the auditory cortex, but what underlying principle or law of physics allows this "natural arrangement" to be possible?

And as if that question were not vexing enough, how about this: as we get better at a new skill or task, be it motor or mental, individual neurons under observation become more selective with improvement. For instance, the brain map for the sense of touch has a "receptive field", a segment of the skin's surface that "reports" to it. As the monkeys were trained to feel an object, the receptive fields of individual neurons got smaller, firing only when smaller parts of the fingertip touched the object. Thus, despite the fact that the size of the brain map increases, each neuron in the map becomes responsible for a smaller part of the skin surface, allowing for finer touch discrimination. Overall the map becomes "more precise". Again begging a deeper question: what underlying, as yet not understood, principle makes this possible?

Finally, to add a third vexing question: as neurons are trained they become more discriminating, and faster. In one experiment Merzenich and his team trained monkeys to discriminate sounds over shorter and shorter spans of time. The trained neurons fired more quickly in response to the faster sounds, processed them in shorter time periods, and needed less time between firings. Faster neurons ultimately lead to faster thought, because speed is thought to be a crucial component of intelligence. The faster-firing signals also got "clearer", meaning they tended to synchronize with one another, leading to a stronger signal - they become team players, so to speak. A powerful signal has greater impact on the brain. When we want to remember something, it is crucial that we hear it clearly. Lasting change only occurs in brain maps when the subject pays close, undivided attention to the task at hand.

Sorry for the long-winded reiteration of sections of my posts; I needed them to lay out my case. Do you believe that any theories or models have to account for the observations with Merzenich's Silver Spring monkeys? His nerve-severing experiments and measurements of the movement of the brain maps offer compelling evidence and measurable physical limits. These experiments offer hard data (to my knowledge never repeated since Merzenich's original experiments, due to the controversy over performing them).

Do you believe that mathematical models and theories must account for and accommodate the areas observed in Merzenich's experiments? Personally, I do, and I value your opinions. The results beg for a logical and hopefully mathematical explanation.

BTW. Merry Christmas to all of you...

Rhody... :smile:
 
  • #92
Pythagorean said:
I agree that a mapping system is still a dynamic system, I guess I just don't see the mapping equation explicitly and I wouldn't know how to analyze this system, but this is probably due to my ignorance. Thinking about metaheuristics though, I kind of arrived at some kind of intuition about the mapping in a dynamical sense.

In my understanding, dynamical systems are basically Markovian systems. They can be divided according to whether their state space and time are continuous or discrete. When both are continuous, a differential geometric approach is possible.

There are 3 sorts of systems that appear to (but don't really) fall outside these systems:
1) control systems - these receive an input that is the "external stimulus" in biology or "external control" in engineering. In the continuous state space and time, the differential geometric approach can be extended through the use of Lie brackets (the standard example is parallel parking).
2) non-Markovian systems - these arise from Markovian systems in which we do not have explicit knowledge of at least one degree of freedom. In some cases, limited aspects of the full Markovian system can be recovered, eg. in the continuous space and time case, where there is an attractor, Ruelle-Takens embedding recovers the attractor topology (see the sketch after this list). A related problem in engineering is the minimal (dynamical) realization of a linear filter.
3) stochastic systems - these arise from Markovian systems in which we do not have explicit knowledge of the initial conditions or external stimulus.
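A minimal sketch of the delay-embedding idea mentioned in point 2 (my own illustration, assuming numpy; the Lorenz system stands in for "the full Markovian system"). Only x(t) is "measured", and a three-dimensional state is rebuilt from delayed copies of that single observable:

```python
# Delay-coordinate (Takens-style) embedding sketch (my own illustration):
# simulate the Lorenz system, pretend only x(t) is observable, and reconstruct
# a 3-d state vector from delayed copies of that single degree of freedom.
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

state = np.array([1.0, 1.0, 1.0])
xs = []
for _ in range(5000):
    state = lorenz_step(state)
    xs.append(state[0])                          # the single measured degree of freedom
xs = np.array(xs)

tau = 10                                         # delay, in samples
embedded = np.column_stack([xs[:-2 * tau], xs[tau:-tau], xs[2 * tau:]])
print("reconstructed state vectors:", embedded.shape)   # points on a reconstructed attractor
```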

Pythagorean said:
The following is not a dynamical systems approach per se, but these are methods generally accepted to be necessary for confining the solution space of a dynamical system.

The following book explains metaheuristic approaches (in general, not just biology). I find two approaches particularly interesting: exploration and exploitation. I think designing a good AI would require utilizing both, and additionally, the AI program "knowing" when to switch between exploration and exploitation.

Metaheuristics: From Design to Implementation
El-Ghazali Talbi
ISBN: 978-0-470-27858-1

Genetic/evolutionary algorithms are an example of a heuristic approach that steals ideas from nature, particularly the implementation of a stochastic optimization.

Hmmm, is that the same exploration and exploitation as in http://www.ncbi.nlm.nih.gov/pubmed/20410125 ?

rhody said:
I always thought dopamine and acetylcholine were neurotransmitters rather than neuromodulators?

Dopamine and acetylcholine are "non-classical" neurotransmitters and are called neuromodulators, because they act on different time scales from the fast "classical" neurotransmitters.

rhody said:
I think that whatever theories and models describe how the brain learns, adapts and responds to injury should consider results from experiments done in the past. Specifically, in my posts https://www.physicsforums.com/showpost.php?p=2925375&postcount=25 and https://www.physicsforums.com/showpost.php?p=2971857&postcount=30 from my plasticity thread. Excerpts below, regarding brain maps rearranging themselves in topographical order in response to severing nerves, with the results then observed experimentally using micro probes after surgery. My point is that there is a physical limit to the area of adaptation (thought to be 1 to 2 millimeters, but through experiment observed to be almost half an inch!)

Sorry for the long-winded reiteration of sections of my posts; I needed them to lay out my case. Do you believe that any theories or models have to account for the observations with Merzenich's Silver Spring monkeys? His nerve-severing experiments and measurements of the movement of the brain maps offer compelling evidence and measurable physical limits. These experiments offer hard data (to my knowledge never repeated since Merzenich's original experiments, due to the controversy over performing them).

Do you believe that mathematical models and theories must account for and accommodate the areas observed in Merzenich's experiments? Personally, I do, and I value your opinions. The results beg for a logical and hopefully mathematical explanation.

I'm not specifically familiar with which papers deal with the Silver Spring monkeys (Edit: Reading Rhody's quote, the Silver Spring monkeys were not Merzenich's, but Edward Taub's). However, work by Merzenich such as http://www.ncbi.nlm.nih.gov/pubmed/6725633 and http://www.ncbi.nlm.nih.gov/pubmed/9497289 is generally considered to be implemented by some form of Hebbian learning (change in synaptic strength as a function of the correlation between pre- and postsynaptic activity). The detailed mathematical description of the learning rule is still unknown because several factors that may be important are experimentally poorly described. One factor is whether it is necessary for the presynaptic neuron to spike before the postsynaptic neuron. Second is the influence of neuromodulators such as dopamine and acetylcholine. Third, the detailed circuitry of the system is unknown and apparently complicated, so it is unknown at which synapses the changes occur.

Experiments trying to look at these include:
http://www.ncbi.nlm.nih.gov/pubmed/16423693
http://www.ncbi.nlm.nih.gov/pubmed/16929304
http://www.ncbi.nlm.nih.gov/pubmed/18004384

Theoretical work includes (I'm casting very widely, since these mechanisms may occur throughout the cortex)
http://www.ncbi.nlm.nih.gov/pubmed/11684002
http://www.ncbi.nlm.nih.gov/pubmed/17444757
http://www.ncbi.nlm.nih.gov/pubmed/20573887
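A minimal sketch of the generic Hebbian idea mentioned above (my own illustration, assuming numpy; it uses Oja's normalised variant so the weights stay bounded, and none of the listed unknowns - spike timing, neuromodulators, circuit identity - are represented):

```python
# A generic correlation-based (Hebbian) learning rule, in Oja's normalised form
# so the weights stay bounded (my own illustration, not a model from the papers
# linked above).  Weights grow with the correlation between presynaptic input x
# and postsynaptic output y.
import numpy as np

rng = np.random.default_rng(3)
cov = np.array([[1.0, 0.8],            # two correlated presynaptic inputs
                [0.8, 1.0]])
w = rng.normal(size=2)
eta = 0.01

for _ in range(5000):
    x = rng.multivariate_normal([0.0, 0.0], cov)   # presynaptic activity
    y = w @ x                                      # postsynaptic activity
    w += eta * y * (x - y * w)                     # Oja's rule: Hebbian term minus decay

print("learned weights:", np.round(w, 2))          # aligns (up to sign) with the inputs' principal direction
```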

rhody said:
BTW. Merry Christmas to all of you...

:biggrin:
 
  • #93
rhody said:
Do you believe that mathematical models and theories must account for and accommodate the areas observed in Merzenich's experiments? Personally, I do, and I value your opinions. The results beg for a logical and hopefully mathematical explanation.

I don't find anything surprising in the evidence of cortical plasticity because the brain is "dynamic" - ie: adaptive - over all scales.

It is only surprising if you presume the brain must be constructed bottom-up out of definite hardware components. And given neurons are built out of molecular components like microtubules with a half-life of about 10 minutes, this seems a silly presumption indeed.
 
  • #94
I suppose my definition of dynamical systems has been rather narrow; I have never worked with systems discretized in time, so it is tough for me to identify them. Are stochastic systems, in general, always dynamical systems? I thought it was a more general statement about a probabilistic approach that didn't necessarily require time-evolution considerations.

From atyy's abstract (pertaining to the exploration/exploitation discussion):
This circuit generates song variability that underlies vocal experimentation in young birds and modulates song variability depending on the social context in adult birds.

Yes, this sounds like an example of what I was imagining.
 
  • #95
Pythagorean said:
I suppose my definition of dynamical systems has been rather narrow; I have never worked with systems discretized in time, so it is tough for me to identify them. Are stochastic systems, in general, always dynamical systems? I thought it was a more general statement about a probabilistic approach that didn't necessarily require time-evolution considerations.

Yes, you are right. In general only stochastic systems with an infinite number of variables (one for each time) are considered stochastic dynamical systems. However, it is known that low-dimensional chaotic systems have ergodic attractors that give rise to probabilities (usually called measures) :biggrin:

In the context of neurobiology and Poincare-Izhikevich type analyses, you might be interested in Gutkin and Ermentrout's work on how Poisson-like statistics can be generated.

However, very, very long transients can also masquerade as "attractors" and produce behaviour that is ergodic for all practical purposes: http://www.ncbi.nlm.nih.gov/pubmed/19936316.

Pythagorean said:
Yes, this sounds like an example of what I was imagining.

You may find the background to Leblois et al's work interesting. Xie and Seung present an example of a continuous state and time dynamical rule with stochastic input. The mathematical analysis is hard so they make a heuristic replacement with a continuous state and discrete time system (which I think is non-Markovian) and show that that system does gradient ascent on the reward. Their discrete time rule is very close to the reinforcement learning rules studied in artificial intelligence beginning in the late 1980s, and from which "exploration" and "exploitation" concepts developed (reinforcement learning itself was inspired by even older biology). In addition to Leblois et al's work, you can see this feedback into current work in the models of eg. Fiete and Seung (bird song) or Legenstein et al (brain-machine interfaces). In short: http://chaos.aip.org/resource/1/chaoeh/v21/i3/p037101_s1?view=fulltext&bypassSSO=1 (ok, I admit Crutchfield can be a bit over the top :smile:)
 
  • #96
atyy said:
http://chaos.aip.org/resource/1/chaoeh/v21/i3/p037101_s1?view=fulltext&bypassSSO=1

I put a wrong link there, it should be http://chaos.aip.org/resource/1/chaoeh/v20/i3/p037101_s1?bypassSSO=1.
 
  • #97
I've actually always considered computational a subset of dynamical; but I'm never sure about language differences and semantics, because everyone in 'complexity' uses the same language for different things.
 
  • #98
(General post following, not based on the prior discussion per se, just the spirit of the thread.)

So, there are seven known principal bifurcations of periodic orbits in dynamical systems. The last one was discovered in the 1990s, and it has probably the fanciest name of all the bifurcations: the "Blue Sky Catastrophe".

http://www.scholarpedia.org/article/Blue-sky_catastrophe

So far, I have only seen it used in applications to biological systems; I wonder if it could be a defining feature of life, in the spirit of the book Towards a Mathematical Theory of Complex Biological Systems, which gives 10 defining characteristics of life to be quantified by mathematics.
 
  • #99
Pythagorean said:
I've actually always considered computational a subset of dynamical; but I'm never sure about language differences and semantics, because everyone in 'complexity' uses the same language for different things.

Let me ask one more question about semantics - these are meaningless - but they are fun!

Do you consider any system of ordinary differential equations a dynamical system, or does the evolution parameter have to represent time?

For example, in the renormalization group, which represents a type of emergence, there are ordinary differential equations. The existence of fixed points of the flow is a typical question (Hollowood, first figure - it will warm :devil: your geometric heart). However the evolution parameter is not time, but resolution scale. Would you consider that a dynamical system?

Funnily, in the AdS/CFT correspondence of string theory there seems to be a sort of holographic emergence in which the renormalization group resolution scale becomes a spatial dimension (McGreevy, Fig 1).

Pythagorean said:
So, there are seven known principal bifurcations of periodic orbits in dynamical systems. The last one was discovered in the 1990s, and it has probably the fanciest name of all the bifurcations: the "Blue Sky Catastrophe".

http://www.scholarpedia.org/article/Blue-sky_catastrophe

That is very interesting indeed. Is it a sort of intermittency?

A quick google indicates that it is not (Thompson & Stewart, p264). It seems that there's hysteresis in blue sky, but not in intermittency (Medio and Gallo, p171).
 
  • #100
atyy said:
Do you consider any system of ordinary differential equations a dynamical system, or does the evolution parameter have to represent time?

I've always considered it (possibly incorrectly) a dynamical system as long as the dynamics aren't stagnant. I.e. if the physical solution is steady state or periodic, then it is a system that does not evolve or "go anywhere". If the solutions of the system are chaotic (asymptotic), it is necessarily a dynamical system by this definition.

Of course there's stable chaos and transient chaos, too. Stable chaos isn't real chaos... it doesn't have exponentially diverging perturbations, but it doesn't appear to be steady-state or periodic either, so I'd give it the benefit of the doubt. Transient chaotic systems spend a long time in a chaotic-looking state - long enough to give rise to interesting spatiotemporal structures, during which the short-time Lyapunov exponent is positive... so I would call them dynamical systems too.
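A minimal sketch of the diagnostic being invoked here (my own illustration, assuming numpy; the logistic map at r = 4 is a standard toy, not a neural model). The largest Lyapunov exponent is estimated by following two nearby trajectories and repeatedly renormalising their separation; a positive value is the "exponentially diverging perturbations" signature of real chaos:

```python
# Estimate the largest Lyapunov exponent of the logistic map at r = 4
# (my own illustration).  The known value is ln 2, roughly 0.693.
import numpy as np

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.4, 0.4 + 1e-9      # two nearby trajectories
d0 = abs(y - x)
log_stretch = 0.0
steps = 5000

for _ in range(steps):
    x, y = logistic(x), logistic(y)
    d = abs(y - x)
    log_stretch += np.log(d / d0)       # accumulate the local stretching rate
    y = x + d0 * np.sign(y - x)         # renormalise the perturbation back to d0

print(f"estimated largest Lyapunov exponent: {log_stretch / steps:.3f}")
```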
 
  • #101
I thought this was interesting and worth sharing: TED: Antonio Damasio: The quest to understand consciousness. Here is a nice view of real axonal connections in the brain and the directionality of their pathways. His talk is geared toward "what" the brain does, as he best understands it. How the brain does it is what the three of you have been discussing here. I thought it useful to put this into context.

http://img833.imageshack.us/img833/2078/connectionsinthebrain.jpg

http://img859.imageshack.us/img859/4840/axionalconnections.jpg

Backing up a bit to my post and the responses:

Thanks for your explanation of dopamine and acetylcholine, atyy - now I understand - and thanks for the links.
Dopamine and acetylcholine are "non-classical" neurotransmitters and are called neuromodulators, because they act on different time scales from the fast "classical" neurotransmitters.

apeiron, you said:
It is only surprising if you presume the brain must be constructed bottom-up out of definite hardware components. And given neurons are built out of molecular components like microtubules with a half-life of about 10 minutes, this seems a silly presumption indeed.

You mention a time component, a half-life of about ten minutes for microtubules, whereas I was referring to a distance of about half an inch of change observed in the experiment on the nerves of a monkey's deafferented arm. What does the half-life of a microtubule have to do with the distance, up to half an inch, over which activity was measured in a previously unused brain region?

See excerpt of https://www.physicsforums.com/showpost.php?p=2925375&postcount=25 below:
They performed the procedure in four hours, when it normally took a whole day to complete. They removed part of the monkey's skull and inserted 124 electrodes in different spots of the sensory cortex map for the arm, then stroked the deafferented arm. As expected, the arm sent no impulses to the electrodes. Then Pons stroked the monkey's face, knowing that the brain map for the face is right next to the one for the arm. The neurons in the monkey's deafferented arm map began to fire, confirming that the facial map had taken over the arm map. As Merzenich had seen in his experiments, when a brain map is unused, the brain can organize itself so another mental function can take over the processing space. Most surprising was the scope of the reorganization: over half an inch! Holy crap... that to this humble observer is freaking amazing. The monkey was then euthanized. Over the next six months, this experiment was repeated with three more monkeys, with the same results. Taub had proved that reorganization in damaged brains could occur over very large sectors, giving hope to those suffering from severe brain injury.

Rhody...
 
  • #102
rhody said:
You mention a time component, a half-life of about ten minutes for microtubules, whereas I was referring to a distance of about half an inch of change observed in the experiment on the nerves of a monkey's deafferented arm. What does the half-life of a microtubule have to do with the distance, up to half an inch, over which activity was measured in a previously unused brain region?

You are framing this as a "problem of plasticity", whereas I am pointing out the contrary issue - the difficulty in creating organisational stability. If all the parts are fluid, how do you ever get anything to stand still?

So the puzzle from a biological point of view is stasis rather than flux. How come the cortical maps don't just change all the time and it takes fairly radical surgery, growth and relearning to make a significant change in them?

In fact from memory, the likely story in the case of this particular experiment is that the wider neural connections (from finger to facial maps) already existed. They just would have been very weak. So nothing new would have to grow over that half-inch in fact. There would just have to be upregulation of dendrites and synapses, which happens in hours.
 
  • #103
apeiron said:
So the puzzle from a biological point of view is stasis rather than flux. How come the cortical maps don't just change all the time and it takes fairly radical surgery, growth and relearning to make a significant change in them?
Fast forward the TED talk link to 12:00 and listen to what Antonio Damasio has to say about this; at 14:00 he discusses how the structures (he calls them modules in the diagram) "create brain maps that are exquisitely topographic, and exquisitely interconnected in a recursive pattern." He also goes on to discuss what brain areas give rise to "the self" (14:20 - 14:50). Give it a look and see what you think. I understand that you, atyy and Pythagorean are trying to cover all the bases. A noble but difficult endeavor. It takes persistence, going down false paths, even failure at times, to discover the truth about what happens inside of our noggins.

Rhody...
 
  • #104
As apeiron points out, brain plasticity is both good and bad. The plastic brain is what allows sound localization in some animals to remain accurate even though their heads change as they age. It allows us to learn new things and recover from brain injury. However, severe tinnitus due to brain plasticity is "maladaptive". So the brain should have some means of regulating its plasticity according to age, as it does with the critical period, and according to behavioural necessity, which involves rhody's neuromodulators. Zhou et al summarize this in the introduction of this paper (free!).

When one sees change in the brain, the synapse that changed is not necessarily nearby. To provide a naive example, if one neuron connects to ten, and each of those connects to another ten, then a change in one synapse at the first layer would change the 100 neurons in the last layer, without additional synapses changing. Apeiron mentions that the inputs were probably already there but weak, so that not much neurite lengthening would be needed, just more anatomically local changes. The experimental papers I linked to in post #92 (abstracts only, unfortunately) try to look at weak inputs using intracellular recording. Work that shows that some of the changes are non-local enough to be visible by light microscopy includes Antonini et al and Xu et al.

I remember an interview with Alfred Brendel about trying to learn new fingerings for a piece of music, and how in a moment of stress one reverts to the old fingerings. Most have probably had similar experiences. Zheng and Knudsen did an interesting study that shows the old maps are still there in some sense. Vogels et al's new modelling study, which I hope has enough continuous time evolution for Pythagorean to consider dynamical :), "can accommodate synaptic memories with activity patterns that become indiscernible from the background state but can be reactivated by external stimuli." The background state is a state that is experimentally probabilistically described, and theoretically thought to represent chaos, stable chaos, or transient chaos (Pythagorean, did I get your attention :smile:).

rhody said:
He also goes on to discuss what brain areas give rise to "the self" (14:20 - 14:50).

rhody, thanks for that terrific link. Damasio's talk is wonderfully argued as usual! I'd be interested to know what you think of Holland and Goodman's proposal. What is common to Damasio's and Holland and Goodman's proposals is that there is a part of the brain that makes a model of itself and its interaction with the environment. Probably the difference is that Holland and Goodman's internal models are inspired by work on motor control, and I had myself similarly guessed that the cerebellum :-p is the seat of consciousness. In contrast, Damasio proposes brainstem areas, focussing in particular on the midbrain periaqueductal gray. Most curiously, Wikipedia's article on the PAG explicitly addresses its role in consciousness, and links to comments by Patricia Churchland (about 20 minutes in).
 
  • #105
rhody said:
Fast forward the TED talk link to 12:00 and listen to what Antonio Damasio has to say about this; at 14:00 he discusses how the structures (he calls them modules in the diagram) "create brain maps that are exquisitely topographic, and exquisitely interconnected in a recursive pattern." He also goes on to discuss what brain areas give rise to "the self" (14:20 - 14:50). Give it a look and see what you think. I understand that you, atyy and Pythagorean are trying to cover all the bases. A noble but difficult endeavor. It takes persistence, going down false paths, even failure at times, to discover the truth about what happens inside of our noggins.

Rhody...

I don't really get the point you are trying to make. The brainstem has very little developmental plasticity, the cortex a tremendous amount.

And there are no surprises in Damasio's talk - except where he says the optic nerve apparently exits through the foveal pit. :smile:
 