jpaul said:
So I've been studying how the brain represents and encodes information.
My own interest has been in how the brain works and how likely it is that such functionality evolves (astrobiology slant). And as suggested in the thread, that has taken me to meso- and macroscales. (Neurotransmitters are more useful to grok phylogenies there.)
I am also wary of the use of "information", as it is a set of loose constraints on a system and rarely contributes useful ... well, information ... about it. So, with a layman's selective unfamiliarity with the territory, I have found these items of interest to me:
1. A robust way of long range information transmission in dynamical neural networks is to place them near criticality, with a branching parameter (the expected number of downstream activations per active unit) of 1. I.e. if they branch too eagerly your network is swamped; if they branch too rarely a signal dies. (A toy simulation after the cautions below illustrates the three regimes.) This paper has studied in vitro neural tissue to confirm that it is possible:
[ http://www.jneurosci.org/content/23/35/11167.full.pdf+html ]
- Now the caution of DiracPool applies: whether this is an actual mode used in vivo remains to be shown.
- Another caution is that they rely on power laws over a mere 1-2 orders of magnitude, where you can always find something close to a power law and imply "fractal scaling". The eminent statistician Cosma Shalizi has criticized this repeatedly and developed bona fide statistical tests for power law behavior. Not surprisingly, perhaps, he found that half of the 'power law' results out there are as well or better fitted by exponentials. This paper didn't do any of those tests, IIRC.
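To make the branching picture concrete, here is a toy sketch (my own illustration, not code from the paper): a Galton-Watson branching process where each active unit triggers a Poisson-distributed number of units with mean sigma. At sigma = 1 activity neither swamps the network nor dies out immediately, which is the near-critical regime the avalanche story relies on.

```python
import numpy as np

rng = np.random.default_rng(0)

def avalanche_size(sigma, max_size=100_000):
    """Total activations in one avalanche of a branching process
    with Poisson offspring of mean sigma (capped at max_size)."""
    active, total = 1, 1
    while active > 0 and total < max_size:
        active = rng.poisson(sigma, size=active).sum()
        total += active
    return total

for sigma in (0.8, 1.0, 1.2):  # sub-critical, critical, super-critical
    sizes = [avalanche_size(sigma) for _ in range(2000)]
    print(f"sigma={sigma}: mean size={np.mean(sizes):.1f}, max={max(sizes)}")
```

At sigma = 0.8 avalanches die almost immediately; at sigma = 1.2 most runs hit the size cap (the network is swamped); at sigma = 1.0 the sizes spread over a broad, heavy-tailed distribution, which is what shows up as an approximate power law.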
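The Shalizi-style caution can also be acted on directly: the statistical tests he co-developed (Clauset-Shalizi-Newman) are implemented in the Python `powerlaw` package, so instead of eyeballing a log-log plot one can run a likelihood-ratio test of power law vs. exponential. A sketch, assuming `sizes` holds measured avalanche sizes (e.g. from the snippet above):

```python
import powerlaw  # pip install powerlaw; Clauset-Shalizi-Newman methods

# sizes = observed avalanche sizes (discrete counts)
fit = powerlaw.Fit(sizes, discrete=True)
print("fitted exponent alpha:", fit.power_law.alpha)
print("xmin (fit cutoff):", fit.power_law.xmin)

# R > 0 favors the power law over the exponential; p is the significance
R, p = fit.distribution_compare('power_law', 'exponential')
print(f"power law vs exponential: R={R:.2f}, p={p:.3f}")
```

Over only 1-2 decades of data this test will often fail to separate the candidates, which is exactly the criticism above.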
That said, I also found the initial whole-brain imaging results to show activity too chaotic to be modeled with self-organized "avalanches":
That is one brain on crazy!
However, later repeats fed the brain organized information (the simulated background stripes at top left, cues for the fish to move forward or backward to "keep in place"), and then you get organized behavior:
[ http://www.wired.com/2014/07/neuron-zebrafish-movie/ ; a lot more details here ]
So maybe self-criticality is one way of organizing parts of brains, complementing the more usual "channel" behavior of the brain stem. (A behavior that can be easily identified in the 2nd movie, by the way!) And here "information" provides a useful constraint for once.
2. On larger scales, on the way to the symbolic processing of combinatorial languages such as ours, it seems the cortex of vertebrates (and so likely the homologous mushroom bodies of arthropods) self-organizes symbol learning. Presumably selection for robust learning has enforced the evolution of a specific structure.
That is a way to get around the generic problem with neural network learning, over-training. (I.e. the network learns too many specific quirks of the training material and so can't recognize the pattern in the wild. For example, if fed only normally oriented faces during training, it would classify an upside-down face photo as "without face". A toy demonstration follows below.)
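Over-training (overfitting, in machine learning jargon) is easy to demonstrate in miniature. Here is a sketch of my own, not from the paper, using polynomial regression as a stand-in for an over-flexible network: a model with too much freedom memorizes the training quirks and does worse on fresh data.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_data(n):
    """Noisy samples of a simple underlying rule (a sine wave)."""
    x = rng.uniform(0, 1, n)
    y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, n)
    return x, y

x_train, y_train = make_data(15)
x_test, y_test = make_data(200)  # fresh data the model never saw

for degree in (3, 12):  # modest vs. over-flexible model
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE={train_mse:.3f}, test MSE={test_mse:.3f}")
```

The degree-12 fit hugs the 15 training points (low train error) but generalizes badly to new samples, just like a face classifier trained only on upright faces.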
"In this article from the Proceedings of the National Academy, Rougier et al. demonstrate how a specific network architecture - modeled loosely on what is known about dopaminergic projections from the ventral tegmental area and the basal ganglia to prefrontal cortex - can capture both generalization and symbol-like processing, simply by incorporating biologically-plausible simulations of neural computation. ...
In particular, it had developed abstract representations of feature dimensions, such that each unit in the PFC seemed to code for an entire set of stimulus dimensions, such as "shape," or "color." This is the first time (to my knowledge) that such abstract, symbol-like representations have been observed to self-organize within a neural network.
Furthermore, this network also showed powerful generalization ability. If the network was provided with novel stimuli after training - i.e., stimuli that had particular conjunctions of features that had not been part of the training set - it could nonetheless deal with them correctly."
[ http://develintel.blogspot.se/2006/10/generalization-and-symbolic-processing.html ]
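The generalization claim (correct handling of unseen feature conjunctions) has a simple core that can be sketched without the full Rougier et al. model. Below is a toy stand-in of my own, not their architecture: stimuli are (shape, color) pairs coded as concatenated one-hots, and a linear readout is trained to report the shape. Because the code factorizes into separate "shape" and "color" slots, the readout handles conjunctions it never saw during training.

```python
import numpy as np

n_shapes = n_colors = 4

def encode(shape, color):
    """Factorized code: one-hot shape slot ++ one-hot color slot."""
    v = np.zeros(n_shapes + n_colors)
    v[shape] = 1.0
    v[n_shapes + color] = 1.0
    return v

# Train on the 12 off-diagonal conjunctions; the 4 diagonal ones are novel
train_pairs = [(s, c) for s in range(n_shapes)
               for c in range(n_colors) if s != c]
test_pairs = [(s, s) for s in range(n_shapes)]

X = np.array([encode(s, c) for s, c in train_pairs])
Y = np.eye(n_shapes)[[s for s, _ in train_pairs]]  # task: report "shape"

W, *_ = np.linalg.lstsq(X, Y, rcond=None)  # linear readout

correct = sum(np.argmax(encode(s, c) @ W) == s for s, c in test_pairs)
print(f"novel conjunctions handled correctly: {correct}/{len(test_pairs)}")
```

This is only the easy half of the story, of course: the interesting part of Rougier et al. is that the factorized, dimension-wise code self-organizes from biologically plausible learning, rather than being hand-built as here.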
Intriguingly, the symbol-like behavior stems from a self-organized map of the active memory storage nodes, with the "shape" and "color" dimensions mapping onto PFC units within the network. That is (handwavingly) reminiscent of this year's medicine Nobel Prize find of "place" neuron assemblies of the hippocampus and "grid" assemblies of the adjacent entorhinal cortex, which mammals use to find their way around. (A minimal self-organizing map sketch follows.)
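For concreteness, a minimal Kohonen-style self-organizing map is easy to write down; this is a generic illustration of the mechanism, not the Rougier et al. network. Units on a 1-D lattice compete for each input, and the winner and its lattice neighbors move toward the input, so nearby units end up coding for nearby stimuli:

```python
import numpy as np

rng = np.random.default_rng(3)

n_units, dim = 20, 2                # 1-D lattice of units, 2-D inputs
weights = rng.uniform(0, 1, (n_units, dim))

for t in range(5000):
    x = rng.uniform(0, 1, dim)                 # random input stimulus
    winner = np.argmin(np.linalg.norm(weights - x, axis=1))
    lr = 0.5 * np.exp(-t / 2000)               # decaying learning rate
    radius = 5.0 * np.exp(-t / 2000)           # shrinking neighborhood
    d = np.arange(n_units) - winner            # lattice distance to winner
    h = np.exp(-d**2 / (2 * radius**2))        # neighborhood function
    weights += lr * h[:, None] * (x - weights)

# After training, lattice neighbors should code for similar inputs:
gaps = np.linalg.norm(np.diff(weights, axis=0), axis=1)
print("mean distance between neighboring units:", gaps.mean().round(3))
```

The topographic ordering is the "map" part: after training, neighboring units respond to similar stimuli, the property that lets dimension-like regions ("shape" here, "color" there) emerge.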
FWIW, I have never found much use for "information" constraints re symbol/map handling. They seem to be their own thing(s), quite different from (say) a template of Shannon information channels that can transmit such symbols. (But again, layman here.)