Evidence for Globalized Consciousness

  • #1
Pythagorean
method said:
The Max Planck scientists in Tübingen led by Nikos Logothetis have now addressed this issue using electrophysiological methods to monitor the neural activity in the lateral prefrontal cortex of macaque monkeys during ambiguous visual stimulation. The visual stimuli used allow for multiple perceptual interpretations, even though the actual input remained the same. In doing so, Panagiotaropoulos and his team were able to show that the electrical activity monitored in the lateral prefrontal cortex correlates with what the macaque monkeys actually perceive.

conclusion said:
They thus concluded that visual awareness is not only reliably reflected in the temporal lobe, but also in the lateral prefrontal cortex of primates. The results depict that the neuronal correlates of consciousness are embedded in this area, which has a direct connection to premotor and motor areas of the brain, and is therefore able to directly affect motor output. These findings support the “frontal lobe hypothesis” of conscious visual perception established in 1995 by the researchers Crick (the co-discoverer of the structure of the DNA molecule) and Koch that awareness is related to neural activity with direct access to the planning stages of the brain.

News article (note: the above quotes are from the news article, so subject to journalist interpretation):

http://neurosciencenews.com/conscious-perception-global-neural-networks-prefrontal-cortex/

peer reviewed article:

http://www.cell.com/neuron/abstract/S0896-6273(12)00380-7
 
  • #3
Pythagorean said:

Summary

Neuronal discharges in the primate temporal lobe, but not in the striate and extrastriate cortex, reliably reflect stimulus awareness. However, it is not clear whether visual consciousness should be uniquely localized in the temporal association cortex. Here we used binocular flash suppression to investigate whether visual awareness is also explicitly reflected in feature-selective neural activity of the macaque lateral prefrontal cortex (LPFC), a cortical area reciprocally connected to the temporal lobe. We show that neuronal discharges in the majority of single units and recording sites in the LPFC follow the phenomenal perception of a preferred stimulus. Furthermore, visual awareness is reliably reflected in the power modulation of high-frequency (>50 Hz) local field potentials in sites where spiking activity is found to be perceptually modulated. Our results suggest that the activity of neuronal populations in at least two association cortical areas represents the content of conscious visual perception.
This is saying that visual consciousness is located in the temporal lobes and also in the lateral prefrontal cortex. I am not seeing how this supports the view it's global, as the article asserts it does. Some piece of that reasoning is missing as far as I can see.
 
  • #4
The two camps have been localized vs. global. Global really just means not localized, i.e. distributed... that there's no "seat of consciousness"
 
  • #5
Pythagorean said:
The two camps have been localized vs. global. Global really just means not localized, i.e. distributed... that there's no "seat of consciousness"
Where do the localizers localize it to, the thalamus?
 
  • #7
Pythagorean said:
I think the most famous example is Crick and Koch:

Crick and the claustrum:
http://www.nature.com/nature/journal/v435/n7045/full/4351040a.html?guid=on

But, I didn't think there's just one. I remember some noise being made about the precuneus:

The Precuneus:
http://www.ncbi.nlm.nih.gov/pubmed/17603406

There's lots of attention on the frontal lobes too.
A guy named Hypnagogue who used to post here brought the claustrum to my attention once (he'd probably been reading about Crick), but this is the first I've heard of the precuneus being a suspect. It seems that two separate things are being discussed, though. In your article the issue is consciousness of the visual field. The precuneus is asserted to have bearing on the sense of self. These could both be true and non-contradictory.
 
  • #8
I also think we need to remember that localising the "seat of consciousness" is a bit of a copout. We have localised it to the brain so far but we still have no clue how it works. Localising it to an area of the brain would be much the same. It would certainly be interesting and would direct further research, but it wouldn't be much of an answer to the real mystery of consciousness.
 
  • #9
madness said:
I also think we need to remember that localising the "seat of consciousness" is a bit of a copout. We have localised it to the brain so far but we still have no clue how it works. Localising it to an area of the brain would be much the same. It would certainly be interesting and would direct further research, but it wouldn't be much of an answer to the real mystery of consciousness.
It would only be a copout if you stopped there and thought you'd explained consciousness. If it is localized, though, you won't be able to explain anything until you find the location and analyze what's happening there.
 
  • #10
zoobyshoe said:
It seems that two separate things are being discussed, though. In your article the issue is consciousness of the visual field.

You'd have to have accepted distributed consciousness already to see consciousness as something that can be divided functionally like that :)
 
  • #11
I think that brings up a nuance in the distributed/localised debate. Different aspects could be localised in different brain areas, but there is a stronger sense of distribution in which no aspect of consciousness might be localised to any single brain area. This would be the case if visual consciousness was itself distributed throughout the brain, and so was the sense of self etc. I would probably favour the stronger form of distributed consciousness. Bear in mind also that our overall consciousness seems to take place after the integration of the various senses and the sense of self etc.
 
  • #12
Pythagorean said:
You'd have to have accepted distributed consciousness already to see consciousness as something that can be divided functionally like that :)

You bastard! Saw right through me! :)
 
  • #13
madness said:
I think that brings up a nuance in the distributed/localised debate. Different aspects could be localised in different brain areas, but there is a stronger sense of distribution in which no aspect of consciousness might be localised to any single brain area. This would be the case if visual consciousness was itself distributed throughout the brain, and so was the sense of self etc. I would probably favour the stronger form of distributed consciousness. Bear in mind also that our overall consciousness seems to take place after the integration of the various senses and the sense of self etc.
I don't know if there are any "schools of thought" that assert this, but my own notion is that consciousness is a collection of tiny parts, pixels as it were. A firing neuron may be the equivalent of a pixel. Groups of neurons that fire in response to the same things, say stimulation by light wavelength (color perception), constitute a kind and level of consciousness unto themselves. Were there some primitive life form that only had this kind of neuron (and accompanying receptors) we could call it conscious, but it would, of course, be a very limited consciousness compared to ours.

By this reasoning, if you cover an eye with your hand you are subtracting some consciousness. Close both eyes, you subtract more. Go to sleep and you subtract a very large chunk.

Someone could easily consider this simplistic, I'm aware. It ignores the whole issue of "intelligence" among other things.
 
  • #14
I would tend to side with the predictive models of perception (and by extension consciousness). These theories model perception as the result of an internal, hierarchical, top-down set of predictions which attempt to "explain away" the incoming sense data. In this way, a group of neurons don't fire in response to stimulation by a wavelength of light in a bottom-up constructive manner. In fact, neural signals should represent the error between prediction and incoming data according to these models.

To me this seems to provide a more integrated and less atomistic view of consciousness. I also think it is more in line with the fact that we are conscious at all. A passive, bottom up filter between sensory input and motor output would intuitively seem less conscious than a system which generates an internal model of the external world and attempts to predict the incoming stream of sense data.
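For concreteness, the prediction-error scheme described above can be sketched in a few lines of code. This is only a toy illustration of the general idea (one level, one scalar hidden cause), not any specific model from the predictive coding literature; all the names and parameters here are made up for the example.

```python
import numpy as np

# Toy predictive-coding loop: a "higher level" holds an internal
# estimate mu of a hidden cause; the "lower level" receives noisy sense
# data and passes only the prediction error back up, which is then used
# to update the internal model.

rng = np.random.default_rng(0)

true_cause = 3.0      # hidden state generating the sense data
mu = 0.0              # internal model's current estimate (the prediction)
learning_rate = 0.1

errors = []
for step in range(200):
    sense_data = true_cause + rng.normal(0.0, 0.5)  # noisy bottom-up input
    prediction_error = sense_data - mu              # what the model failed to "explain away"
    mu += learning_rate * prediction_error          # top-down estimate updated by the error
    errors.append(abs(prediction_error))

print(round(mu, 2))
print(np.mean(errors[:20]) > np.mean(errors[-20:]))
```

As the estimate converges on the hidden cause, the residual error shrinks toward the noise floor, which is the sense in which the incoming data gets "explained away."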
 
  • #15
zoobyshoe said:
Someone could easily consider this simplistic, I'm aware. It ignores the whole issue of "intelligence" among other things.

Yes, this is simplistic. :smile:

It is not a case of local vs global but about how the two things work together. So "a state of conscious awareness" is both a highly particular state (the brain is intent on some single focal view) and also a highly general state (at the same time, it is actively pushing everything else it "knows" into the background).

So where it comes to neural firings, the dog that didn't bark, my dear Watson, is also part of the state.

All neurons are firing off all the time. Unless they are dead. But firing rates get suppressed, become unsynchronised, etc, if the brain wants to push "what they are saying" into the background.

This is why some theories of consciousness (like one of Crick's multiple attempts to locate a seat of awareness) stress the thalamus, a crucial bottleneck in orchestrating what cortical activity is currently being enhanced, and what suppressed.

But the prefrontal is also very active in this same orchestration (reaching down to the thalamus to control it) so can also seem to be a seat of consciousness (of the spotlight of attention).

So a better mental image is to think of the brain as a buzzing confusion of neural rustling - the kind of day-dreamy unfocused awareness you get when idling. Which can then be kicked into states of high-contrast representation by turning up some neurons/circuits, and suppressing the contributions of others.

In this way, consciousness is the result of local~global differentiation and integration. The whole of the brain is always involved. But locally, some activity is ramped up, other activity suppressed. And the dogs that don't bark count along with the dogs that do so far as the final experience goes.

This explains stuff like priming or the fact you only notice the hum of the fridge when it switches off. Your brain was representing the hum, but it was also suppressed to the point where it was not part of your focal spotlight. But the sudden absence of a now expected part of your mental background is the dog that stopped barking. And so the brain shifts its focus to find out what happened exactly.
 
  • #16
apeiron said:
...consciousness is the result of local~global differentiation and integration. The whole of the brain is always involved. But locally, some activity is ramped up, other activity suppressed. And the dogs that don't bark count along with the dogs that do so far as the final experience goes.
This all makes sense, and it describes what my mind always seems to be up to, but it seems you are equating attention with consciousness. I would class them as two separate considerations.
 
  • #17
zoobyshoe said:
This all makes sense, and it describes what my mind always seems to be up to, but it seems you are equating attention with consciousness. I would class them as two separate considerations.

No, it would be the successfully attended plus the successfully ignored that adds up to some particular state of consciousness in this view. If there is a spotlight of attention, then by the same token there is the penumbra of all that is not being attended - or rather actively suppressed.

So attention leads to what is reportable. And that can seem like all that matters. But you may know from fine arts 101 that they start you off by focusing on negative space so that you "really see things". You can't have the light without the shade, the object without the context. So you have to step back and learn to clearly see all the things you had learned to ignore.
 
  • #18
madness said:
I would tend to side with the predictive models of perception (and by extension consciousness). These theories model perception as the result of an internal, hierarchical, top-down set of predictions which attempt to "explain away" the incoming sense data. In this way, a group of neurons don't fire in response to stimulation by a wavelength of light in a bottom-up constructive manner. In fact, neural signals should represent the error between prediction and incoming data according to these models.

To me this seems to provide a more integrated and less atomistic view of consciousness. I also think it is more in line with the fact that we are conscious at all. A passive, bottom up filter between sensory input and motor output would intuitively seem less conscious than a system which generates an internal model of the external world and attempts to predict the incoming stream of sense data.
This all makes sense and is plausible, but it doesn't particularly explain why we would become conscious of the difference between prediction and data stream, or why that difference would become conscious of itself. How is that difference manifested in neuronal activity? Suppose you can tell me exactly: the predicting neurons do this and the data neurons do that, and there's this difference between the two. How does that difference become a conscious experience while the other activity remains unconscious?
 
  • #19
zoobyshoe said:
Suppose you can tell me exactly: the predicting neurons do this and the data neurons do that, and there's this difference between the two.

There aren't these two types of neurons. Instead the brain generates a predictive state in the neurons - a mental image of what is likely to happen - and then what happens gets matched against the prediction.

This happens all the way down to the "input" level as shown by the way retinal ganglion cells fire off ahead of an expected event.

Markus Meister, Ph.D., a professor of molecular and cellular biology at Harvard University, studies this anticipatory mechanism and other aspects of signal processing in the retina. In a paper in the journal Nature, he and his colleagues showed how nerve cells in the retina respond most actively to the leading edge of a moving object. They also fire slightly ahead of it.

http://www.bmesphotos.org/WhitakerArchives/00_annual_report/meister.html
 
  • #20
apeiron said:
There aren't these two types of neurons. Instead the brain generates a predictive state in the neurons - a mental image of what is likely to happen - and then what happens gets matched against the prediction.

This happens all the way down to the "input" level as shown by the way retinal ganglion cells fire off ahead of an expected event.
OK, it's the same neurons experiencing both the prediction and the actual data. Same question though: how is the difference between the two transformed into "consciousness"?

apeiron said:
No, it would be the successfully attended plus the successfully ignored that adds up to some particular state of consciousness in this view. If there is a spotlight of attention, then by the same token there is the penumbra of all that is not being attended - or rather actively suppressed.

So attention leads to what is reportable. And that can seem like all that matters. But you may know from fine arts 101 that they start you off by focusing on negative space so that you "really see things". You can't have the light without the shade, the object without the context. So you have to step back and learn to clearly see all the things you had learned to ignore.
I saw Ramachandran on TV many years back pointing this out: we are only conscious of differences. That stuck with me.

But this leads to the same question I asked madness. How does difference become consciousness? Why doesn't a table become conscious when I press on it as I do? The table experiences just as much difference in pressure. I have neurons firing and the table doesn't. There has to be something unique about the firing of a neuron that it leads to me becoming conscious of the table while the table remains unconscious of me. That's my reasoning.

I guess I'm betraying a prejudice for EM activity as the root of consciousness, because consciousness seems to demand something dynamic and "electrical".
 
  • #21
zoobyshoe said:
OK, it's the same neurons experiencing both the prediction and the actual data. Same question though: how is the difference between the two transformed into "consciousness"?

But you are conscious of your predictions too. So no transformation is necessary. A strong state of sensory prediction is what you would call a mental image. Or a memory.

If I asked you to remember exactly what you had for breakfast this morning, you would conjure up a scene in your head. This would be you anticipating what it would be like to be actually back at that moment, using all your stored knowledge to generate a predictive state.

zoobyshoe said:
But this leads to the same question I asked madness. How does difference become consciousness? Why doesn't a table become conscious when I press on it as I do? The table experiences just as much difference in pressure. I have neurons firing and the table doesn't. There has to be something unique about the firing of a neuron that it leads to me becoming conscious of the table while the table remains unconscious of me. That's my reasoning.

I guess I'm betraying a prejudice for EM activity as the root of consciousness, because consciousness seems to demand something dynamic and "electrical".

The firing of a neuron is just a depolarisation spike that goes somewhere. So it is the patterns being spun that create an organised state of representation. You don't need any special material or energetic property associated with the spike. It is the web of connections that is dynamical.

It is a bit like the molecules that make a wave. The wave crashes on the shore. The molecules just bob up and down where they are. Consciousness would equate to the wave and spiking neurons to the bobbing molecules - as a rough analogy.

Another simplified way of describing it is that the brain wraps itself around the shape of things. So rather than the analogy of a screen display that must be viewed, you have a sculpting that is the experience being lived.
 
  • #22
zoobyshoe said:
This all makes sense and is plausible, but it doesn't particularly explain why we would become conscious of the difference between prediction and data stream, or why that difference would become conscious of itself. How is that difference manifested in neuronal activity? Suppose you can tell me exactly: the predicting neurons do this and the data neurons do that, and there's this difference between the two. How does that difference become a conscious experience while the other activity remains unconscious?

This is exactly the point. We are conscious of our own internal model of the external world, not the incoming data stream or the error signal. Sometimes this model is wrong, and abruptly changed (e.g. multistable perception). Consciousness starts at the top of the hierarchy and constantly updates itself by propagating predictions down and passing errors back up, which it uses to update its internal model.

Of course I can't explain why any system would actually become conscious. It just seems that one with an internal model of reality should be more aware of reality than one which functions as a relay between input and output.
 
  • #23
From the snippets Pythagorean posted in his OP, there doesn't seem to be a requirement for any "internal model" in this task, is there? If so, then the definition of "visual awareness" is probably different from a definition of "consciousness" that requires it.
 
  • #24
madness said:
This is exactly the point. We are conscious of our own internal model of the external world, not the incoming data stream or the error signal. Sometimes this model is wrong, and abruptly changed (e.g. multistable perception). Consciousness starts at the top of the hierarchy and constantly updates itself by propagating predictions down and passing errors back up, which it uses to update its internal model.

Of course I can't explain why any system would actually become conscious. It just seems that one with an internal model of reality should be more aware of reality than one which functions as a relay between input and output.
Let me tackle it from a different direction: you pointed out in the other thread that electrical stimulation of an area of the parietal lobes causes the experience of phosphenes. In addition to that, transcranial magnetic stimulation of the primary visual cortex causes phosphenes. They performed this test on both normal people and migraineurs and determined that it takes much less stimulation to cause phosphenes in the latter: they have touchier neurons, so to speak.

At any rate, these tests, plus the fact of abstract visual auras in cortical spreading depression, just about prove that the firing of neurons causes elemental experiences. A person having a visual migraine aura can literally see the wave of hyperexcited neuronal firing slowly creeping across their visual field, leaving the visual "hole" of depressed neurons in its wake. This visual experience has nothing to do with an external data stream or the difference between an internal model and a data stream. It is the raw experience of neurons firing. The light-sensitive neurons firing in the absence of light nonetheless create the experience of light.

Any internal model of an external visual scene we are looking at must, therefore, be composed of light sensitive neurons whose firing is organized according to an external data stream. We're not conscious of the data stream, but of the firing neurons that create the experience of light in response to the data stream.

With other senses the conscious experience, the qualia, are different: a tactile neuron firing creates a little "atom" or "pixel" or "quale" of the sensation of skin pressure; an olfactory neuron, the sensation of smell; an auditory neuron, the sensation of sound, according to the Brodmann map.

You get the same hyper-activation of raw sensory experiences in simple partial seizures. The hypersynchronous firing of a population of sensory neurons will create an intense experience of a particular sense: the flashing of the visual field as if the person has a police flasher trained on them, a sudden roaring or loud buzzing sound, an incredibly intense, unpleasant smell. There are even proprioceptive simple partial seizures where a person is attacked by the uncanny sensation that a limb is in a different position than it actually is: one man felt his arm was raised over his head despite the fact he could see it at his side, and another person, whose foot was quite relaxed, felt her toes were curled up underneath. The sensory neurons were firing independently of any external data. The firing of the sensory neurons creates the sensory experience. When the firing is hypersynchronous the sensory experience is overwhelmingly intense, and when the firing is erratic and chaotic so is the sensory experience.

I have to conclude conscious experience is caused by the firing of neurons, that the firing neuron, itself, is what is conscious of the experience it is dedicated to experience. However you make the neuron fire, whether it be in response to objective external data, or by the induction avalanche of a seizure or migrainous wave, the experience happens.

The things you and Apeiron speak about are also happening and necessary, but they are higher functions: integration, memory, planning, thinking. They are vastly more complex. The thing I'm addressing is the primal, unprocessed sensory experience, not a sense of self as an autonomous being, or anything like that.
 
  • #25
atyy said:
From the snippets Pythagorean posted in his OP, there doesn't seem to be a requirement for any "internal model" in this task, is there? If so, then the definition of "visual awareness" is probably different from a definition of "consciousness" that requires it.

I don't see the function of the anticipatory firing as being to create consciousness. It's just practical to predict that things will obey Newton I. Gives you an edge in case you have to react.
 
  • #26
atyy said:
From the snippets Pythagorean posted in his OP, there doesn't seem to be a requirement for any "internal model" in this task, is there? If so, then the definition of "visual awareness" is probably different from a definition of "consciousness" that requires it.

I don't think there's really a way to evaluate, test, or quantify "consciousness". It would be a lot like talking about "electromagnetism". You can't really quantify that word, as it's an over-arching concept. You can quantify specific quantities and properties within electromagnetism (charge, skin depth, field potential) but the concept "electromagnetism" is a huge grab-bag of intuitively associated concepts.

"Consciousness" has the same problem. We can talk about and even quantify specific aspects of it, but saying anything about all consciousness is not very rigorous; save it for the discussion and introduction section of a paper. It's a huge patchwork of ideas, observations, and personal connotation.
 
  • #27
Pythagorean said:
It's a huge patchwork of ideas, observations, and personal connotation.
Which is why it's such a great conversation/debate generator.
 
  • #28
atyy said:
From the snippets Pythagorean posted in his OP, there doesn't seem to be a requirement for any "internal model" in this task, is there? If so, then the definition of "visual awareness" is probably different from a definition of "consciousness" that requires it.

fMRI studies of these tasks show that perceptual switches are instigated by activity in the prefrontal cortex, indicating that a top down influence does cause the reordering of activity in the visual cortex as expected from predictive coding models. It has been mostly disproven that perceptual bistability arises from bottom-up rivalry in the early stages of visual processing alone.

The fact that there is a percept which varies independently of the stimulus to me implies that there must be an internal model. The only question is whether it is constructed in sequential processing stages from the sensory input or whether some form of predictive coding is involved.
 
  • #29
zoobyshoe said:
The things you and Apeiron speak about are also happening and necessary, but they are higher functions: integration, memory, planning, thinking. They are vastly more complex. The thing I'm addressing is the primal, unprocessed sensory experience, not a sense of self as an autonomous being, or anything like that.

But your "visual neuron" will be getting more top-down inputs from higher levels than it gets coming up as its "data stream".

There just is no such thing as primary, unprocessed experience - even with phosphenes or form constants. You know from the literature that people have to learn to see them, to attend to them, to be able to predict their presence and so find them.
 
  • #30
There's also regulatory functions on the metabolic level that will drive, regulate, and modulate regions of networks in a more global manner, based on environmental cues, internal molecular clocks, toxic side-effects, drug use, genetic expression, etc.

For instance, I've posted the research on Fabp7 and PSD-95 trafficking during the night-time shift, indicating a major overhaul of synaptic connectivity in the hippocampus. These kinds of influences that cause global modulation in a region or whole nucleus are bound to change the spectral intensity of perceptive input.
 
  • #31
apeiron said:
You know from the literature that people have to learn to see them, to attend to them, to be able to predict their presence and so find them.
I don't know what you mean by this.

For example, a couple years ago Evo posted reporting the startling experience of a visual migraine aura. She had never experienced this before, or heard of it, and had no idea what was happening.

When and where did she "learn" to predict the presence of it and look for it?
 
  • #32
apeiron said:
But your "visual neuron" will be getting more top-down inputs from higher levels than it gets coming up as its "data stream".
Under normal circumstances. In the case of seizure and migraine or direct electrical or magnetic stimulation, I have to suppose all normal inputs from above or below are rendered moot; the neuron isn't firing in response to them. According to what I've read, during seizures the neurons are setting each other off by induction, not through synaptic connections.
 
  • #33
zoobyshoe said:
I don't know what you mean by this.

For example, a couple years ago Evo posted reporting the startling experience of a visual migraine aura. She had never experienced this before, or heard of it, and had no idea what was happening.

When and where did she "learn" to predict the presence of it and look for it?

And I just happened to have my first ever visual migraine just a few weeks back. It took me a little while to figure out it was there. I was thinking one eye seemed a little tired. The TV picture was not quite right. Then I eventually realized I was "seeing" a wriggling curve of lights - which for about half an hour I had just been trying to look around and ignore.

As soon as I thought, "that could be a visual migraine", that is when I could really focus on it and experience it as an object in itself. I could relate it to pictures that people had tried to draw of them.

So even with such a startling kind of "bottom-up, data driven" phenomenon, I found that top-down attention and expectation had to come into play for the sensation to be consciously reportable.
 
  • #34
zoobyshoe said:
Under normal circumstances. In the case of seizure and migraine or direct electrical or magnetic stimulation, I have to suppose all normal inputs from above or below are rendered moot; the neuron isn't firing in response to them. According to what I've read, during seizures the neurons are setting each other off by induction, not through synaptic connections.

If a seizure does indeed render the normal organised connectedness of the brain moot, then the result is...unconsciousness.
 
  • #35
madness said:
fMRI studies of these tasks show that perceptual switches are instigated by activity in the prefrontal cortex, indicating that a top down influence does cause the reordering of activity in the visual cortex as expected from predictive coding models. It has been mostly disproven that perceptual bistability arises from bottom-up rivalry in the early stages of visual processing alone.

The fact that there is a percept which varies independently of the stimulus to me implies that there must be an internal model. The only question is whether it is constructed in sequential processing stages from the sensory input or whether some form of predictive coding is involved.

In the paper of the OP, isn't there a unique percept to each stimulus, although maybe not a unique stimulus to each percept? So it seems to be simply a many-to-one mapping. Why is an internal model required?

Even in the case where there are "top-down" influences, couldn't that just be apparent stochasticity due to long-transients from previous "bottom up" stimuli? Again, no "internal model" seems to be needed.

Example of long transients
http://arxiv.org/abs/cond-mat/0603154
http://arxiv.org/abs/0705.3214
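The many-to-one point can be illustrated with a generic toy simulation: a noise-driven particle in a double-well potential switches between two "percepts" even though the "stimulus" (the potential) never changes. This is only a sketch of the bistability idea under assumed parameters, not the model from the arXiv papers above.

```python
import numpy as np

# Toy bistable "percept": Euler-Maruyama simulation of a particle in a
# double-well potential V(x) = -x^2/2 + x^4/4, with wells near +1 and -1.
# The potential (the "stimulus") is constant; only the noise drives switching.

rng = np.random.default_rng(1)

x = 1.0              # start in the +1 well (one perceptual interpretation)
dt = 0.01
noise = 0.6
percepts = []
for step in range(100_000):
    drift = x - x**3                                  # -dV/dx
    x += drift * dt + noise * np.sqrt(dt) * rng.normal()
    percepts.append(1 if x > 0 else -1)

# Count perceptual switches: sign changes of the reported percept.
switches = sum(1 for a, b in zip(percepts, percepts[1:]) if a != b)
print(switches > 0)
print(set(percepts))
```

Both interpretations occur over the run despite the fixed input, which is the sense in which a constant stimulus can map onto multiple percepts without anything in the simulation that resembles an internal model.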
 