|Sep10-12, 04:47 AM||#18|
Comp. Neuro. introductory textbooks
The idea that emergent behaviour influences constituent components is a modelling issue. For example, Nunez models the neocortex in terms of neurons embedded in global synaptic fields (http://plaza.ufl.edu/johncad/nunez.pdf). A bottom-up model would have no need to explicitly introduce a top-down influence; it would simply emerge through the low-level dynamics.
|Sep10-12, 10:59 AM||#19|
A lot of problems come down to the "levels fallacy". People have this belief that, for instance, life is explained completely by cells, cells completely by molecules, and molecules completely by subatomic particles. But this isn't exactly true. A life form is more than its cells: it has digestive juices, connective tissues, and stimuli from the environment (or from other organisms) acting on it. So the levels aren't perfectly isolated, with each neatly contributing to the next; they're mixed up in a complicated hierarchy. At some point, you have to be careful to acknowledge that the line you draw between one level and the next may be arbitrary.
|Sep11-12, 08:59 AM||#20|
This conversation has gotten away from me a bit.
In an abstract, very generalised sense, are top-down and bottom-up approaches somewhat analogous to the following?
Using the equation you posted previously, in a bottom-up approach you would speculate that a prey population would increase exponentially in the absence of predation, so there is a term to account for that. However, the population also depends on the number of predators, and the 'rate of predation' depends on the number of predators but also on the number of prey, so there is a term to account for that as well, and so on until equations are built to model the system. This is building the equations from some core assumptions; the equations are then used to predict future outcomes given certain inputs (e.g. starting values).
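The bottom-up scheme described above can be sketched in code: write down the terms that follow from the assumed mechanisms, then integrate them forward. This is only an illustration of the idea; the parameter values, the simple Euler stepping, and the function names are all my own assumptions, not something from the thread.

```python
# Bottom-up sketch: build the predator-prey equations from core
# assumptions, then step them forward in time with simple Euler steps.
# All parameter values here are illustrative, not fitted to any data.

def lotka_volterra(prey, pred, alpha=1.0, beta=0.1, delta=0.075, gamma=1.5):
    """Derivatives built term by term from the assumed mechanisms:
    - prey grow exponentially (alpha * prey) absent predation
    - predation removes prey at rate beta * prey * pred
    - predators grow from predation (delta * prey * pred) and die off (gamma * pred)
    """
    dprey = alpha * prey - beta * prey * pred
    dpred = delta * prey * pred - gamma * pred
    return dprey, dpred

def simulate(prey0, pred0, dt=0.001, steps=20000):
    prey, pred = prey0, pred0
    history = [(prey, pred)]
    for _ in range(steps):
        dprey, dpred = lotka_volterra(prey, pred)
        prey += dt * dprey
        pred += dt * dpred
        history.append((prey, pred))
    return history

# Starting values are the "inputs" mentioned above.
trajectory = simulate(prey0=10.0, pred0=5.0)
```

Note that the no-predation prediction falls straight out of the construction: with `pred0=0` the only surviving term is `alpha * prey`, so the prey population grows exponentially by assumption.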
A top-down approach, on the other hand, would measure the population numbers at different points in time, look for patterns, and then come up with an equation that matches the pattern, using this to make future predictions given some input values. In this case, the equation is built by trying to match the data, not from some 'fundamental' picture of what is happening: it isn't derived from an understanding of underlying mechanisms, although it might give some insight into what they are. Using the predator/prey example, the equation might come from fitting the data; then, by looking at idealised cases (e.g. no predators), it might be possible to break the equation up to see what it's doing. One could then deduce that one of its terms implies the prey population would increase exponentially in the absence of predation, whereas this might not have been obvious beforehand.
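The top-down direction can be sketched the same way: start from measurements, fit a candidate equation, and only afterwards interpret the fitted form. The "measured" counts below are synthetic stand-ins I made up (roughly following 10·e^(0.5t)), and the log-linear least-squares fit is just one simple choice of fitting method.

```python
# Top-down sketch: start from measured population counts, fit a
# candidate equation to the pattern, then interpret the fitted form.
# The "measurements" here are synthetic stand-ins for field data.

import math

# Hypothetical prey counts sampled yearly with no predators present.
years = [0, 1, 2, 3, 4, 5]
counts = [10.0, 16.5, 27.2, 44.8, 73.9, 121.8]

# Candidate model N(t) = N0 * exp(r * t). Taking logs makes it linear,
# so an ordinary least-squares line fit recovers r and N0.
logs = [math.log(c) for c in counts]
n = len(years)
t_mean = sum(years) / n
l_mean = sum(logs) / n
r = sum((t - t_mean) * (l - l_mean) for t, l in zip(years, logs)) \
    / sum((t - t_mean) ** 2 for t in years)
n0 = math.exp(l_mean - r * t_mean)

print(f"fitted growth rate r = {r:.3f}, initial size N0 = {n0:.1f}")
```

Only after the fit does the exponential term let one deduce, after the fact, that prey grow exponentially without predation; the mechanism was read off the fitted equation rather than assumed up front.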
Hopefully that makes sense...
Any response appreciated.
|Sep11-12, 07:07 PM||#21|
So this is where the difficulty lies. With neuron firing, for instance, global attentive states bear down to shape the individual receptive fields. The global state may be constituted of this local firing, but at the same time, the local firing is shaped by the global state.
What models of this strongly emergent behaviour need to do is model the local elements as some set of degrees of freedom that can then be subject to emergent constraints. So the local elements are themselves dynamical (in ways the model accurately captures).
This would be the approach of, for instance, Grossberg's ART models or Friston's Bayesian brain.
A snapshot of a system at any point in time will indeed make it appear that the definite actions of the microscale are all that are causing the macroscale state. The causality is all bottom-up. But this is an artifact of taking such a restricted view. Systems and processes live in time, and holistic models would attempt to capture all the relevant spatiotemporal scales of action.
|Nov2-12, 03:09 AM||#22|
Paul Humphreys and Cyrille Imbert (eds.), Models, Simulations, and Representations, Routledge, 2012, ISBN 9780415891967.
And what you describe is, in any case, within the scope of weak emergence (see the definitions in the above). The only difference between weak and strong emergence is that strong emergence carries extra philosophical baggage. Scientifically, there is no significance to calling your boundary conditions a "cause". Emergence is a matter of extensive vs. intensive properties: a purely extensive system behaves only as the sum of its parts, but boundary conditions in spatiotemporal systems can have intensive properties, and group behavior emerges that you would not get from a single member of the ensemble. There is no doubt that the boundary conditions (including the coupling term) affect the group behavior; for many network systems in nature, they are everything about emergence.
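The point about a coupling term producing group behavior absent from any single member can be illustrated with a standard example, the Kuramoto model of coupled oscillators. This is my own choice of illustration, not something from the thread, and the frequencies, coupling strength, and step counts are all arbitrary:

```python
# Sketch of how a coupling term yields group behavior you would not
# get from a single member: the Kuramoto model. Each oscillator alone
# just rotates at its own frequency; with coupling, phases synchronise.
# All parameter values are illustrative.

import cmath
import math
import random

def order_parameter(phases):
    """|r| near 1 means full synchrony; near 0 means incoherence."""
    return abs(sum(cmath.exp(1j * p) for p in phases) / len(phases))

def kuramoto(n=50, coupling=2.0, dt=0.01, steps=2000, seed=0):
    rng = random.Random(seed)
    freqs = [rng.gauss(0.0, 0.3) for _ in range(n)]
    phases = [rng.uniform(0.0, 2 * math.pi) for _ in range(n)]
    for _ in range(steps):
        new = []
        for i in range(n):
            # The coupling sum is the "intensive" ingredient: each
            # oscillator feels the whole group's phase configuration.
            pull = sum(math.sin(phases[j] - phases[i]) for j in range(n)) / n
            new.append(phases[i] + dt * (freqs[i] + coupling * pull))
        phases = new
    return order_parameter(phases)

print(kuramoto(coupling=0.0))  # no coupling: stays incoherent
print(kuramoto(coupling=2.0))  # coupled: synchronises
```

With the coupling term set to zero the system is purely extensive (a sum of independent rotators); turning it on produces collective synchrony that no single oscillator exhibits.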
As things stand, Madness was describing the model itself (top-down vs. bottom-up) and talking about the processes being modeled, as is typical in the cognitive sciences. That's independent of whether you use a top-down or bottom-up approach to the modeling.
i.e. you could model a system that displays both top-down and bottom-up processing using either top-down or bottom-up modeling methods. That's why I conceded to Madness's statement that top-down and bottom-up mean different things in different contexts, but I also noted that we were talking only about the modeling approach itself, not the thing being modeled, so the context was already set.
But yes, pain is the canonical example of top-down processing given in neuroscience classes (being conscious of a wound can make it hurt more). Of course, it's not purely top-down; it's more like bottom-up sensory input being modulated by top-down focus. The easiest way to model this would be to use bottom-up processing to model the wound (a cut feels different from a bruise) and use focus/attention as a function (or weight) that modifies a term in the bottom-up process... i.e. scale the pain by how much attentional focus is on it (this would be the top-down portion of the modeling).
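The mixed scheme described above, a bottom-up wound signal multiplied by a top-down attention weight, can be sketched in a few lines. The wound intensities and the particular gain rule are illustrative assumptions of mine, not a published model:

```python
# Sketch of a bottom-up pain signal modulated by top-down attention.
# The intensity values and the gain rule are illustrative assumptions.

# Bottom-up: the stimulus-driven signal depends on the wound type
# (a cut feels different from a bruise).
WOUND_INTENSITY = {"cut": 0.8, "bruise": 0.4, "burn": 0.9}

def felt_pain(wound, attention):
    """Bottom-up intensity scaled by top-down attention in [0, 1]:
    full attention doubles the felt signal, no attention halves it."""
    bottom_up = WOUND_INTENSITY[wound]
    top_down_gain = 0.5 + 1.5 * attention  # gain runs from 0.5 to 2.0
    return bottom_up * top_down_gain

# The same wound hurts more when attended to:
print(felt_pain("cut", attention=1.0))  # 0.8 * 2.0 = 1.6
print(felt_pain("cut", attention=0.0))  # 0.8 * 0.5 = 0.4
```

The wound model is the bottom-up portion; the multiplicative `top_down_gain` is the top-down portion bolted onto one term of it, exactly the structure suggested above.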
Of course, you could also model the whole system bottom-up (even the top-down process). An example would be an integrated circuit that receives three inputs: one from the eyes, one from the skin, and one from the memory banks. The inputs from the eyes and the skin would have to match through an association process in the memory banks and produce an integrated response. This particular example is currently impossible as far as I know (too much and too little information at the same time), so the top-down modeling approach is often convenient for top-down considerations, which makes it easy to conflate the top-down process with the top-down modeling approach.