Evidence for Globalized Consciousness

AI Thread Summary
Recent research by Max Planck scientists has demonstrated that visual awareness in macaque monkeys is reflected in the lateral prefrontal cortex (LPFC) as well as the temporal lobe, suggesting that conscious perception is represented in more than one cortical area. The study used ambiguous visual stimuli to show that neural activity in the LPFC correlates with what the monkeys actually perceive, supporting the "frontal lobe hypothesis" of conscious visual perception. This finding indicates that consciousness may not be localized to a single area but may instead involve multiple brain regions working together. The discussion turns on the ongoing debate between localized and distributed theories of consciousness, with several posters arguing that both perspectives may coexist and that the integration of neural activity across cortical areas is central to conscious experience.
Pythagorean
method said:
The Max Planck scientists in Tübingen led by Nikos Logothetis have now addressed this issue using electrophysiological methods to monitor the neural activity in the lateral prefrontal cortex of macaque monkeys during ambiguous visual stimulation. The visual stimuli used allow for multiple perceptual interpretations, even though the actual input remains the same. In doing so, Panagiotaropoulos and his team were able to show that the electrical activity monitored in the lateral prefrontal cortex correlates with what the macaque monkeys actually perceive.

conclusion said:
They thus concluded that visual awareness is not only reliably reflected in the temporal lobe, but also in the lateral prefrontal cortex of primates. The results indicate that the neuronal correlates of consciousness are embedded in this area, which has a direct connection to premotor and motor areas of the brain and is therefore able to directly affect motor output. These findings support the “frontal lobe hypothesis” of conscious visual perception, proposed in 1995 by Crick (the co-discoverer of the structure of the DNA molecule) and Koch, which holds that awareness is related to neural activity with direct access to the planning stages of the brain.

news article (note: the above quotes are from the news article, so subject to journalist interpretation):

http://neurosciencenews.com/conscious-perception-global-neural-networks-prefrontal-cortex/

peer reviewed article:

http://www.cell.com/neuron/abstract/S0896-6273(12)00380-7
 
Pythagorean said:

Summary

Neuronal discharges in the primate temporal lobe, but not in the striate and extrastriate cortex, reliably reflect stimulus awareness. However, it is not clear whether visual consciousness should be uniquely localized in the temporal association cortex. Here we used binocular flash suppression to investigate whether visual awareness is also explicitly reflected in feature-selective neural activity of the macaque lateral prefrontal cortex (LPFC), a cortical area reciprocally connected to the temporal lobe. We show that neuronal discharges in the majority of single units and recording sites in the LPFC follow the phenomenal perception of a preferred stimulus. Furthermore, visual awareness is reliably reflected in the power modulation of high-frequency (>50 Hz) local field potentials in sites where spiking activity is found to be perceptually modulated. Our results suggest that the activity of neuronal populations in at least two association cortical areas represents the content of conscious visual perception.
This is saying that visual consciousness is located in the temporal lobes and also in the lateral prefrontal cortex. I am not seeing how this supports the view it's global, as the article asserts it does. Some piece of that reasoning is missing as far as I can see.
 
the two camps have been localized vs. global. Global really just means not localized. I.e. distributed... that there's no "seat of consciousness"
 
Pythagorean said:
the two camps have been localized vs. global. Global really just means not localized. I.e. distributed... that there's no "seat of consciousness"
Where do the localizers localize it to, the thalamus?
 
Pythagorean said:
I think the most famous example is Crick and Koch:

Crick and the claustrum:
http://www.nature.com/nature/journal/v435/n7045/full/4351040a.html?guid=on

But, I didn't think there's just one. I remember some noise being made about the precuneus:

The Precuneus:
http://www.ncbi.nlm.nih.gov/pubmed/17603406

There's lots of attention on the frontal lobes too.
A guy named Hypnagogue who used to post here brought the claustrum to my attention once (he'd probably been reading about Crick), but this is the first I've heard of the precuneus being a suspect. It seems that two separate things are being discussed, though. In your article the issue is consciousness of the visual field. The precuneus is asserted to have bearing on sense of self. These could both be true and non-contradictory.
 
I also think we need to remember that localising the "seat of consciousness" is a bit of a copout. We have localised it to the brain so far but we still have no clue how it works. Localising it to an area of the brain would be much the same. It would certainly be interesting and would direct further research, but it wouldn't be much of an answer to the real mystery of consciousness.
 
madness said:
I also think we need to remember that localising the "seat of consciousness" is a bit of a copout. We have localised it to the brain so far but we still have no clue how it works. Localising it to an area of the brain would be much the same. It would certainly be interesting and would direct further research, but it wouldn't be much of an answer to the real mystery of consciousness.
It would only be a copout if you stopped there and thought you'd explained consciousness. If it is localized, though, you won't be able to explain anything until you find the location and analyze what's happening there.
 
  • #10
zoobyshoe said:
It seems that two separate things are being discussed, though. In your article the issue is consciousness of the visual field.

You'd have to have accepted distributed consciousness already to see consciousness as something that can be divided functionally like that :)
 
  • #11
I think that brings up a nuance in the distributed/localised debate. Different aspects could be localised in different brain areas, but there is a stronger sense of distribution in which no aspect of consciousness might be localised to any single brain area. This would be the case if visual consciousness was itself distributed throughout the brain, and so was the sense of self etc. I would probably favour the stronger form of distributed consciousness. Bear in mind also that our overall consciousness seems to take place after the integration of the various senses and the sense of self etc.
 
  • #12
Pythagorean said:
You'd have to have accepted distributed consciousness already to see consciousness as something that can be divided functionally like that :)

You bastard! Saw right through me! :)
 
  • #13
madness said:
I think that brings up a nuance in the distributed/localised debate. Different aspects could be localised in different brain areas, but there is a stronger sense of distribution in which no aspect of consciousness might be localised to any single brain area. This would be the case if visual consciousness was itself distributed throughout the brain, and so was the sense of self etc. I would probably favour the stronger form of distributed consciousness. Bear in mind also that our overall consciousness seems to take place after the integration of the various senses and the sense of self etc.
I don't know if there are any "schools of thought" that assert this, but my own notion is that consciousness is a collection of tiny parts, pixels as it were. A firing neuron may be the equivalent of a pixel. Groups of neurons that fire in response to the same things, say stimulation by light wavelength (color perception), constitute a kind and level of consciousness unto themselves. Were there some primitive life form that only had this kind of neuron (and accompanying receptors) we could call it conscious, but it would, of course, be a very limited consciousness compared to ours.

By this reasoning, if you cover an eye with your hand you are subtracting some consciousness. Close both eyes, you subtract more. Go to sleep and you subtract a very large chunk.

Someone could easily consider this simplistic, I'm aware. It ignores the whole issue of "intelligence" among other things.
 
  • #14
I would tend to side with the predictive models of perception (and by extension consciousness). These theories model perception as the result of an internal, hierarchical, top-down set of predictions which attempt to "explain away" the incoming sense data. In this way, a group of neurons don't fire in response to stimulation by a wavelength of light in a bottom-up constructive manner. In fact, neural signals should represent the error between prediction and incoming data according to these models.

To me this seems to provide a more integrated and less atomistic view of consciousness. I also think it is more in line with the fact that we are conscious at all. A passive, bottom up filter between sensory input and motor output would intuitively seem less conscious than a system which generates an internal model of the external world and attempts to predict the incoming stream of sense data.
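
To make the error-signal idea concrete, here is a minimal toy sketch (my own construction, not code from any predictive coding paper): a single top-down prediction is repeatedly corrected by its mismatch with noisy incoming data, so what propagates is the error, and the prediction ends up "explaining away" the input.

Code:
import numpy as np

# One-level predictive coding loop (illustrative toy only).
rng = np.random.default_rng(0)
data = 2.0              # true stimulus value
prediction = 0.0        # initial top-down prediction
learning_rate = 0.1

for step in range(200):
    sample = data + rng.normal(0.0, 0.1)  # noisy incoming sense datum
    error = sample - prediction           # what the "error units" would carry
    prediction += learning_rate * error   # update to explain the error away

print(round(prediction, 2))  # approximately 2.0; the error has shrunk to noise

Full models such as Rao and Ballard's stack many of these loops into a hierarchy, with each level predicting the activity of the level below.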
 
  • #15
zoobyshoe said:
Someone could easily consider this simplistic, I'm aware. It ignores the whole issue of "intelligence" among other things.

Yes, this is simplistic. :smile:

It is not a case of local vs global but about how the two things work together. So "a state of conscious awareness" is both a highly particular state (the brain is intent on some single focal view) and also a highly general state (at the same time, it is actively pushing everything else it "knows" into the background).

So where it comes to neural firings, the dog that didn't bark, my dear Watson, is also part of the state.

All neurons are firing off all the time. Unless they are dead. But firing rates get suppressed, become unsynchronised, etc, if the brain wants to push "what they are saying" into the background.

This is why some theories of consciousness (like one of Crick's multiple attempts to locate a seat of awareness) stress the thalamus, a crucial bottleneck in orchestrating what cortical activity is currently being enhanced, and what suppressed.

But the prefrontal is also very active in this same orchestration (reaching down to the thalamus to control it) so can also seem to be a seat of consciousness (of the spotlight of attention).

So a better mental image is to think of the brain as a buzzing confusion of neural rustling - the kind of day-dreamy unfocused awareness you get when idling. Which can then be kicked into states of high-contrast representation by turning up some neurons/circuits, and suppressing the contributions of others.

In this way, consciousness is the result of local~global differentiation and integration. The whole of the brain is always involved. But locally, some activity is ramped up, other activity suppressed. And the dogs that don't bark count along with the dogs that do so far as the final experience goes.

This explains stuff like priming or the fact you only notice the hum of the fridge when it switches off. Your brain was representing the hum, but it was also suppressed to the level where it was not part of your focal spotlight. But the sudden absence of a now expected part of your mental background is the dog that stopped barking. And so the brain shifts its focus to find out what happened exactly.
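
A crude way to picture that enhance-and-suppress arithmetic, purely as a toy sketch of my own (not anything from the literature): every channel keeps firing, attention only rescales the gains, and the fridge-hum effect falls out as a large unpredicted change on a suppressed channel.

Code:
# Toy gain-modulation sketch (illustrative only; numbers are made up).
# Every channel keeps firing; "attention" multiplies gains up or down.
channels = {"page you read": 20.0, "room tone": 15.0, "fridge hum": 15.0}  # Hz
gains = {"page you read": 2.0, "room tone": 0.3, "fridge hum": 0.3}

salience = {k: channels[k] * gains[k] for k in channels}
print(max(salience, key=salience.get))  # "page you read": the focal spotlight

# The fridge switches off: the suppressed channel still carried an
# expectation, and the unpredicted deviation from it is what pulls
# the spotlight over to the sudden silence.
expected, observed = channels["fridge hum"], 0.0
print(abs(observed - expected))  # 15.0 Hz of unexplained change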
 
  • #16
apeiron said:
Yes, this is simplistic. :smile:

It is not a case of local vs global but about how the two things work together. So "a state of conscious awareness" is both a highly particular state (the brain is intent on some single focal view) and also a highly general state (at the same time, it is actively pushing everything else it "knows" into the background).

So where it comes to neural firings, the dog that didn't bark, my dear Watson, is also part of the state.

All neurons are firing off all the time. Unless they are dead. But firing rates get suppressed, become unsynchronised, etc, if the brain wants to push "what they are saying" into the background.

This is why some theories of consciousness (like one of Crick's multiple attempts to locate a seat of awareness) stress the thalamus, a crucial bottleneck in orchestrating what cortical activity is currently being enhanced, and what suppressed.

But the prefrontal is also very active in this same orchestration (reaching down to the thalamus to control it) so can also seem to be a seat of consciousness (of the spotlight of attention).

So a better mental image is to think of the brain as a buzzing confusion of neural rustling - the kind of day-dreamy unfocused awareness you get when idling. Which can then be kicked into states of high-contrast representation by turning up some neurons/circuits, and suppressing the contributions of others.

In this way, consciousness is the result of local~global differentiation and integration. The whole of the brain is always involved. But locally, some activity is ramped up, other activity suppressed. And the dogs that don't bark count along with the dogs that do so far as the final experience goes.

This explains stuff like priming or the fact you only notice the hum of the fridge when it switches off. Your brain was representing the hum, but it was also suppressed to the level where it was not part of your focal spotlight. But the sudden absence of a now expected part of your mental background is the dog that stopped barking. And so the brain shifts its focus to find out what happened exactly.
This all makes sense, and it describes what my mind always seems to be up to, but it seems you are equating attention with consciousness. I would class them as two separate considerations.
 
  • #17
zoobyshoe said:
This all makes sense, and it describes what my mind always seems to be up to, but it seems you are equating attention with consciousness. I would class them as two separate considerations.

No, it would be the successfully attended plus the successfully ignored that adds up to some particular state of consciousness in this view. If there is a spotlight of attention, then by the same token there is the penumbra of all that is not being attended - or rather actively suppressed.

So attention leads to what is reportable. And that can seem like all that matters. But you may know from fine arts 101 that they start you off by focusing on negative space so that you "really see things". You can't have the light without the shade, the object without the context. So you have to step back and learn to clearly see all the things you had learned to ignore.
 
  • #18
madness said:
I would tend to side with the predictive models of perception (and by extension consciousness). These theories model perception as the result of an internal, hierarchical, top-down set of predictions which attempt to "explain away" the incoming sense data. In this way, a group of neurons don't fire in response to stimulation by a wavelength of light in a bottom-up constructive manner. In fact, neural signals should represent the error between prediction and incoming data according to these models.

To me this seems to provide a more integrated and less atomistic view of consciousness. I also think it is more in line with the fact that we are conscious at all. A passive, bottom up filter between sensory input and motor output would intuitively seem less conscious than a system which generates an internal model of the external world and attempts to predict the incoming stream of sense data.
This all makes sense and is plausible, but it doesn't particularly explain why we would become conscious of the difference between prediction and data stream, or why that difference would become conscious of itself. How is that difference manifested in neuronal activity? Suppose you can tell me exactly: the predicting neurons do this and the data neurons do that, and there's this difference between the two. How does that difference become a conscious experience while the other activity remains unconscious?
 
  • #19
zoobyshoe said:
Suppose you can tell me exactly: the predicting neurons do this and the data neurons do that, and there's this difference between the two.

There aren't these two types of neurons. Instead the brain generates a predictive state in the neurons - a mental image of what is likely to happen - and then what happens gets matched against the prediction.

This happens all the way down to the "input" level as shown by the way retinal ganglion cells fire off ahead of an expected event.

Markus Meister, Ph.D., a professor of molecular and cellular biology at Harvard University, studies this anticipatory mechanism and other aspects of signal processing in the retina. In a paper in the journal Nature, he and his colleagues showed how nerve cells in the retina respond most actively to the leading edge of a moving object. They also fire slightly ahead of it.

http://www.bmesphotos.org/WhitakerArchives/00_annual_report/meister.html
 
  • #20
apeiron said:
There aren't these two types of neurons. Instead the brain generates a predictive state in the neurons - a mental image of what is likely to happen - and then what happens gets matched against the prediction.

This happens all the way down to the "input" level as shown by the way retinal ganglion cells fire off ahead of an expected event.
OK, it's the same neurons experiencing both the prediction and the actual data. Same question though: how is the difference between the two transformed into "consciousness"?

apeiron said:
No, it would be the successfully attended plus the successfully ignored that adds up to some particular state of consciousness in this view. If there is a spotlight of attention, then by the same token there is the penumbra of all that is not being attended - or rather actively suppressed.

So attention leads to what is reportable. And that can seem like all that matters. But you may know from fine arts 101 that they start you off by focusing on negative space so that you "really see things". You can't have the light without the shade, the object without the context. So you have to step back and learn to clearly see all the things you had learned to ignore.
I saw Ramachandran on TV many years back pointing this out: we are only conscious of differences. That stuck with me.

But this leads to the same question I asked madness. How does difference become consciousness? Why doesn't a table become conscious when I press on it as I do? The table experiences just as much difference in pressure. I have neurons firing and the table doesn't. There has to be something unique about the firing of a neuron that it leads to me becoming conscious of the table while the table remains unconscious of me. That's my reasoning.

I guess I'm betraying a prejudice for EM activity as the root of consciousness because consciousness seems to demand something dynamic and "electrical".
 
  • #21
zoobyshoe said:
OK, it's the same neurons experiencing both the prediction and the actual data. Same question though: how is the difference between the two transformed into "consciousness"?

But you are conscious of your predictions too. So no transformation is necessary. A strong state of sensory prediction is what you would call a mental image. Or a memory.

If I asked you to remember exactly what you had for breakfast this morning, you would conjure up a scene in your head. This would be you anticipating what it would be like to be actually back at that moment, using all your stored knowledge to generate a predictive state.

zoobyshoe said:
But this leads to the same question I asked madness. How does difference become consciousness? Why doesn't a table become conscious when I press on it as I do? The table experiences just as much difference in pressure. I have neurons firing and the table doesn't. There has to be something unique about the firing of a neuron that it leads to me becoming conscious of the table while the table remains unconscious of me. That's my reasoning.

I guess I'm betraying a prejudice for EM activity as the root of consciousness because consciousness seems to demand something dynamic and "electrical".

The firing of a neuron is just a depolarisation spike that goes somewhere. So it is the patterns being spun that create an organised state of representation. You don't need any special material or energetic property associated with the spike. It is the web of connections that is dynamical.

It is a bit like the molecules that make a wave. The wave crashes on the shore. The molecules just bob up and down where they are. Consciousness would equate to the wave and spiking neurons to the bobbing molecules - as a rough analogy.

Another simplified way of describing it is that the brain wraps itself around the shape of things. So rather than the analogy of a screen display that must be viewed, you have a sculpting that is the experience being lived.
 
  • #22
zoobyshoe said:
This all makes sense and is plausible, but it doesn't particularly explain why we would become conscious of the difference between prediction and data stream, or why that difference would become conscious of itself. How is that difference manifested in neuronal activity? Suppose you can tell me exactly: the predicting neurons do this and the data neurons do that, and there's this difference between the two. How does that difference become a conscious experience while the other activity remains unconscious?

This is exactly the point. We are conscious of our own internal model of the external world, not the incoming data stream or the error signal. Sometimes this model is wrong, and abruptly changed (e.g. multistable perception). Consciousness starts at the top of the hierarchy and constantly updates itself by propagating predictions down and errors back up, which it uses to revise its internal model.

Of course I can't explain why any system would actually become conscious. It just seems that one with an internal model of reality should be more aware of reality than one which functions as a relay between input and output.
 
  • #23
From the snippets Pythagorean posted in his OP, there doesn't seem to be a requirement for any "internal model" in this task, is there? If so, then the definition of "visual awareness" is probably different from a definition of "consciousness" that requires it.
 
  • #24
madness said:
This is exactly the point. We are conscious of our own internal model of the external world, not the incoming data stream or the error signal. Sometimes this model is wrong, and abruptly changed (e.g. multistable perception). Consciousness starts at the top of the hierarchy and constantly updates itself by propagating predictions down and errors back up, which it uses to revise its internal model.

Of course I can't explain why any system would actually become conscious. It just seems that one with an internal model of reality should be more aware of reality than one which functions as a relay between input and output.
Let me tackle it from a different direction: you pointed out in the other thread that electrical stimulation of an area of the parietal lobes causes the experience of phosphenes. In addition to that, transcranial magnetic stimulation of the primary visual cortex causes phosphenes. They performed this test on both normal people and migraineurs and determined that it takes much less stimulation to cause phosphenes in the latter: they have touchier neurons, so to speak.

At any rate, these tests, plus the fact of abstract visual auras in cortical spreading depression, just about prove that the firing of neurons causes elemental experiences. A person having a visual migraine aura can literally see the wave of hyperexcited neuronal firing slowly creeping across their visual field, leaving the visual "hole" of depressed neurons in its wake. This visual experience has nothing to do with an external data stream or the difference between an internal model and a data stream. It is the raw experience of neurons firing. The light-sensitive neurons firing in the absence of light nonetheless create the experience of light.

Any internal model of an external visual scene we are looking at must, therefore, be composed of light-sensitive neurons whose firing is organized according to an external data stream. We're not conscious of the data stream, but of the firing neurons that create the experience of light in response to the data stream.

With other senses the conscious experience, the qualia, are different: a tactile neuron firing creates a little "atom" or "pixel" or "quale" of the sensation of skin pressure. An olfactory neuron: the sensation of smell. An auditory neuron: the sensation of sound. Each according to its area on the Brodmann map.

You get the same hyper-activation of raw sensory experiences in simple partial seizures. The hypersynchronous firing of a population of sensory neurons will create an intense experience of a particular sense: the flashing of the visual field as if the person has a police flasher trained on them, a sudden roaring or loud buzzing sound, an incredibly intense, unpleasant smell. There are even proprioceptive simple partial seizures where a person is attacked by the uncanny sensation that a limb is in a different position than it actually is: one man felt his arm was raised over his head despite the fact that he could see it at his side, and another person, whose foot was quite relaxed, felt her toes were curled up underneath. The sensory neurons were firing independently of any external data. The firing of the sensory neurons creates the sensory experience. When the firing is hypersynchronous the sensory experience is overwhelmingly intense, and when the firing is erratic and chaotic so is the sensory experience.

I have to conclude conscious experience is caused by the firing of neurons, that the firing neuron, itself, is what is conscious of the experience it is dedicated to experience. However you make the neuron fire, whether it be in response to objective external data, or by the induction avalanche of a seizure or migrainous wave, the experience happens.

The things you and Apeiron speak about are also happening and necessary, but they are higher functions: integration, memory, planning, thinking. They are vastly more complex. The thing I'm addressing is the primal, unprocessed sensory experience, not a sense of self as an autonomous being, or anything like that.
 
  • #25
atyy said:
From the snippets Pythagorean posted in his OP, there doesn't seem to be a requirement for any "internal model" in this task, is there? If so, then the definition of "visual awareness" is probably different from a definition of "consciousness" that requires it.

I don't see the function of the anticipatory firing as being to create consciousness. It's just practical to predict that things will obey Newton I. It gives you an edge in case you have to react.
 
  • #26
atyy said:
From the snippets Pythagorean posted in his OP, there doesn't seem to be a requirement for any "internal model" in this task, is there? If so, then the definition of "visual awareness" is probably different from a definition of "consciousness" that requires it.

I don't think there's really a way to evaluate, test, or quantify "consciousness". It would be a lot like talking about "electromagnetism". You can't really quantify that word, as it's an over-arching concept. You can quantify specific quantities and properties within electromagnetism (charge, skin depth, field potential) but the concept "electromagnetism" is a huge grab-bag of intuitively associated concepts.

"Consciousness" has the same problem. We can talk about and even quantify specific aspects of it, but saying anything about all consciousness is not very rigorous; save it for the discussion and introduction section of a paper. It's a huge patchwork of ideas, observations, and personal connotation.
 
  • #27
Pythagorean said:
It's a huge patchwork of ideas, observations, and personal connotation.
Which is why it's such a great conversation/debate generator.
 
  • #28
atyy said:
From the snippets Pythagorean posted in his OP, there doesn't seem to be a requirement for any "internal model" in this task, is there? If so, then the definition of "visual awareness" is probably different from a definition of "consciousness" that requires it.

fMRI studies of these tasks show that perceptual switches are instigated by activity in the prefrontal cortex, indicating that a top down influence does cause the reordering of activity in the visual cortex as expected from predictive coding models. It has been mostly disproven that perceptual bistability arises from bottom-up rivalry in the early stages of visual processing alone.

The fact that there is a percept which varies independently of the stimulus to me implies that there must be an internal model. The only question is whether it is constructed in sequential processing stages from the sensory input or whether some form of predictive coding is involved.
 
  • #29
zoobyshoe said:
The things you and Apeiron speak about are also happening and necessary, but they are higher functions: integration, memory, planning, thinking. They are vastly more complex. The thing I'm addressing is the primal, unprocessed sensory experience, not a sense of self as an autonomous being, or anything like that.

But your "visual neuron" will be getting more top-down inputs from higher levels than it gets coming up as its "data stream".

There just is no such thing as primary, unprocessed experience - even with phosphenes or form constants. You know from the literature that people have to learn to see them, to attend to them, to be able to predict their presence and so find them.
 
  • #30
There's also regulatory functions on the metabolic level that will drive, regulate, and modulate regions of networks in a more global manner, based on environmental cues, internal molecular clocks, toxic side-effects, drug use, genetic expression, etc.

For instance, I've posted the research on fabp7 and psd-90 trafficking during the night-time shift, indicating a major overhaul of synaptic connectivity in the hippocampus. These kinds of influences that cause global modulation in a region or whole nucleus are bound to change the spectral intensity of perceptive input.
 
  • #31
apeiron said:
You know from the literature that people have to learn to see them, to attend to them, to be able to predict their presence and so find them.
I don't know what you mean by this.

For example, a couple years ago Evo posted reporting the startling experience of a visual migraine aura. She had never experienced this before, or heard of it, and had no idea what was happening.

When and where did she "learn" to predict the presence of it and look for it?
 
  • #32
apeiron said:
But your "visual neuron" will be getting more top-down inputs from higher levels than it gets coming up as its "data stream".
Under normal circumstances. In the case of seizure and migraine or direct electrical or magnetic stimulation, I have to suppose all normal inputs from above or below are rendered moot, the neuron isn't firing in response to them. According to what I've read, during seizures the neurons are setting each other off by induction, not through synaptic connections.
 
  • #33
zoobyshoe said:
I don't know what you mean by this.

For example, a couple years ago Evo posted reporting the startling experience of a visual migraine aura. She had never experienced this before, or heard of it, and had no idea what was happening.

When and where did she "learn" to predict the presence of it and look for it?

And I just happened to have my first ever visual migraine just a few weeks back. It took me a little while to figure out it was there. I was thinking one eye seemed a little tired. The TV picture was not quite right. Then I eventually realized I was "seeing" a wriggling curve of lights - which for about half an hour I had just been trying to look around and ignore.

As soon as I thought, "that could be a visual migraine", that is when I could really focus on it and experience it as an object in itself. I could relate it to pictures that people had tried to draw of them.

So even with such a startling kind of "bottom-up, data driven" phenomenon, I found that top-down attention and expectation had to come into play for the sensation to be consciously reportable.
 
  • #34
zoobyshoe said:
Under normal circumstances. In the case of seizure and migraine or direct electrical or magnetic stimulation, I have to suppose all normal inputs from above or below are rendered moot, the neuron isn't firing in response to them. According to what I've read, during seizures the neurons are setting each other off by induction, not through synaptic connections.

If a seizure does indeed render the normal organised connectedness of the brain moot, then the result is...unconsciousness.
 
  • #35
madness said:
fMRI studies of these tasks show that perceptual switches are instigated by activity in the prefrontal cortex, indicating that a top down influence does cause the reordering of activity in the visual cortex as expected from predictive coding models. It has been mostly disproven that perceptual bistability arises from bottom-up rivalry in the early stages of visual processing alone.

The fact that there is a percept which varies independently of the stimulus to me implies that there must be an internal model. The only question is whether it is constructed in sequential processing stages from the sensory input or whether some form of predictive coding is involved.

In the paper of the OP, isn't there a unique percept to each stimulus, although maybe not a unique stimulus to each percept? So it seems to be simply a many-to-one mapping. Why is an internal model required?

Even in the case where there are "top-down" influences, couldn't that just be apparent stochasticity due to long-transients from previous "bottom up" stimuli? Again, no "internal model" seems to be needed.

Example of long transients
http://arxiv.org/abs/cond-mat/0603154
http://arxiv.org/abs/0705.3214
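
For what it's worth, here is a toy sketch of that alternative (my own construction, not taken from either arXiv paper): two populations with mutual inhibition and slow adaptation, driven by a constant ambiguous input, alternate in dominance on their own. This is a standard rivalry-style rate model, with no explicit internal model anywhere in it.

Code:
import numpy as np

# Two competing populations (one per eye/percept), mutual inhibition,
# slow firing-rate adaptation, constant identical drive to both.
def f(x):                        # threshold-linear activation
    return np.maximum(x, 0.0)

dt, T = 0.001, 30.0              # time step and duration (s)
tau_r, tau_a = 0.02, 1.0         # fast rate and slow adaptation constants (s)
beta, g, I = 3.0, 3.0, 1.2       # inhibition strength, adaptation gain, drive
r = np.array([0.1, 0.0])         # firing rates
a = np.zeros(2)                  # adaptation variables
dominant = []

for _ in range(int(T / dt)):
    inhib = r[::-1]              # each population inhibits the other
    r += dt / tau_r * (-r + f(I - beta * inhib - g * a))
    a += dt / tau_a * (-a + r)
    dominant.append(int(r[1] > r[0]))

print(int(np.abs(np.diff(dominant)).sum()))  # many switches, constant stimulus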
 
  • #36
apeiron said:
If a seizure does indeed render the normal organised connectedness of the brain moot, then the result is...unconsciousness.
Obviously not. In a simple partial seizure we know the neurons are not communicating in the normal way, and yet consciousness is preserved.

Somatosensory-primary sensory cortex seizures usually elicit positive or negative sensations contralateral to the discharge.[3] Symptoms associated with seizures from the postcentral gyrus include the following:

Tingling
Numbness
Pain
Heat
Cold
Agnosia
Phantom sensations
Sensations of movement
Abdominal pain usually originates from the temporal lobe, and genital pain from the mesial parietal sensory cortex. The posterior parietal cortex is the likely origin of limb agnosia.

Supplemental sensory-secondary sensory cortex seizures may have ipsilateral or bilateral positive or negative sensations or vague axial or diffuse sensations.

Visual-calcarine cortex discharges produce elemental hallucinations including scintillations, scotomata, colored lights, visual field deficits, or field inversion. The visual association cortex is the probable location of origin of complex visual hallucinations and photopsias.

Auditory SPS from the auditory cortex typically are perceived as simple sounds, rather than words or music. Olfactory-uncinate seizures originate from the orbitofrontal cortex and the mesial temporal area. Perceived odors are usually unpleasant, often with a burning quality.

Gustatory seizures usually are associated with temporal lobe origin, although the insula and parietal operculum also have been implicated. Perceived tastes are typically unpleasant, often with a metallic component.

Vestibular seizures originate from various areas, including frontal and temporal-parietal-occipital junction. Symptoms include vertigo, a tilting sensation, and vague dizziness.

Psychic SPS arise predominantly from the temporal and limbic region, including the amygdala, hippocampus, and parahippocampal gyrus. Perceptual hallucinations or illusions are usually complex, visual or auditory, and are rarely bimodal.

Déjà vu and jamais vu phenomena may occur. Fear is usual, but SPS can elicit happiness, sexual arousal, anger, and similar responses. Cognitive responses include feelings of depersonalization, unreality, forced thinking, or feelings that may defy description.

None of these sensations is produced in the ordinary way. They result from the populations of neurons in the given particular area firing wildly in the absence of a legitimate stimulus. The sensations produced are illusions, hallucinations as it were.

http://emedicine.medscape.com/article/1184384-clinical
 
  • #37
zoobyshoe said:
Obviously not. In a simple partial seizure we know the neurons are not communicating in the normal way, and yet consciousness is preserved.

In what way are the neurons communicating if not the usual way? You mentioned something about "EM" as if you think they somehow bypass action potentials and synapses completely.

See - http://www.wisegeek.com/what-is-the-pathophysiology-of-seizures.htm

There is certainly a failure of normal activity. But it merely confirms the point that neurons exist within a network of excitatory and inhibitory influences.

Disrupt that feedback-based balancing act and a part of the brain can generate vivid imagery - suppressed states can become expressed states. And the rest of the brain can try to integrate this chaotic activity into the general web of its activity.

Consciousness is of course not preserved if the disruption becomes more generalised or hits areas key to organising the top-down orchestration of what should be coming bottom up.

Remember that pixels are different because they will glow the same regardless of what happens around them, but will glow only if they have a driving input.

Neurons do it the other way round. They are always spiking regardless of whether they have input (top-down or bottom-up). But they also always reflect what is happening around them in terms of firing rates, synchrony and as Pythagorean mentions, a whole host of sub-level processes as well.
 
  • #38
apeiron said:
You mentioned something about "EM" as if you think they somehow bypass action potentials and synapses completely.
It seems they do in some cases:

Increased synchrony between neurons due to ephaptic interactions
Electrical fields created by synchronous activation of pyramidal neurons in laminar structures, such as the hippocampus, may increase further the excitability of neighboring neurons by nonsynaptic (ie, ephaptic) interactions. Other possible nonsynaptic interactions include electrotonic interactions due to gap junctions or changes in extracellular ionic concentrations of potassium and calcium. Increased coupling of neurons due to permanent increases in the functional availability of gap junctions might be a mechanism that predisposes to seizures or status epilepticus.

What I meant by the usual connections being moot, as well, is that a seizing neuron obviously isn't going to be at liberty to respond to the usual sort of input. It's tied up seizing.

The thing to attack about my notion is not the claim the neuron isn't responding to input, but the unspoken assumption it's not generating output. It is, of course. The thalamus will get entrained into any cortical seizure. The thalamus could then communicate the experience to the frontal lobes where it becomes a conscious experience, if you're into the frontal lobes as the location of consciousness.
 
  • #39
apeiron said:
And I just happened to have my first ever visual migraine just a few weeks back. It took me a little while to figure out it was there. I was thinking one eye seemed a little tired. The TV picture was not quite right. Then I eventually realized I was "seeing" a wriggling curve of lights - which for about half an hour I had just been trying to look around and ignore.

As soon as I thought, "that could be a visual migraine", that is when I could really focus on it and experience it as an object in itself. I could relate it to pictures that people had tried to draw of them.

So even with such a startling kind of "bottom-up, data driven" phenomenon, I found that top-down attention and expectation had to come into play for the sensation to be consciously reportable.
Classic ocular migraines are unmistakeable and shock the heck out of you when you have one. It isn't something that is vague or hard to notice. They continue to rotate and grow until they disappear, usually in 15-20 minutes.

The picture below is most like the common crescent with geometric patterns inside; it is very bright and pulsing and the designs inside undulate. Not something you can ignore or that is hard to see.

[Attached image: bright, pulsing crescent-shaped migraine aura with geometric patterns inside]


Sounds like you might have had a minor visual disturbance.
 
  • #40
zoobyshoe said:
The thing to attack about my notion is not the claim the neuron isn't responding to input, but the unspoken assumption it's not generating output.

Who was saying that?

What was being stressed was the need to move away from a computational view of neuronal function - one where input produces output in some simplistic mechanical fashion.

Instead, as Madness says, you have a neuronal network that is predicting its inputs and adapting its state in the light of prediction errors, pattern matches, or other sources of surprisal. So any individual neuron is already in a state of meaningful output - it is already contributing fully to the global state of the brain - before it responds to any input.

So the spike train of a neuron is saying something before it changed its rate or timing, just as much as it is after the change. You could say that the neuron might go from saying something insignificant and ignored to something suddenly distinctive and attention-worthy. But that is confirming the importance of those contextual judgements - it is the larger brain that is saying there is a difference between the two. The neuron itself is just switching rates. What does it know about anything?
 
  • #41
atyy said:
In the paper of the OP, isn't there a unique percept to each stimulus, although maybe not a unique stimulus to each percept? So it seems to be simply a many-to-one mapping. Why is an internal model required?

Even in the case where there are "top-down" influences, couldn't that just be apparent stochasticity due to long-transients from previous "bottom up" stimuli? Again, no "internal model" seems to be needed.

Example of long transients
http://arxiv.org/abs/cond-mat/0603154
http://arxiv.org/abs/0705.3214

I don't have access to the article from home, but as far as I understand from the abstract it is an example of binocular rivalry. There is only one binocular stimulus, but it is rivalrous between the two eyes and the brain suppresses the input from one eye.

To understand the sense in which the brain generates an internal model, read these: http://philosophyandpsychology.com/?p=1013
http://en.wikipedia.org/wiki/Bayesian_brain
http://mrl.isr.uc.pt/pub/bscw.cgi/d27540/ReviewKnillPouget2.pdf

The final link is a peer reviewed article by one of the top guys in this subfield.
 
  • #42
apeiron said:
Who was saying that?

What was being stressed was the need to move away from a computational view of neuronal function - one where input produces output in some simplistic mechanical fashion.

Instead, as Madness says, you have a neuronal network that is predicting its inputs and adapting its state in the light of prediction errors, pattern matches, or other sources of surprisal. So any individual neuron is already in a state of meaningful output - it is already contributing fully to the global state of the brain - before it responds to any input.

So the spike train of a neuron is saying something before it changed its rate or timing, just as much as it is after the change. You could say that the neuron might go from saying something insignificant and ignored to something suddenly distinctive and attention-worthy. But that is confirming the importance of those contextual judgements - it is the larger brain that is saying there is a difference between the two. The neuron itself is just switching rates. What does it know about anything?
Hey Apeiron, please link to the peer reviewed studies for this. Thanks.
 
  • #44
madness said:
I don't have access to the article from home, but as far as I understand from the abstract it is an example of binocular rivalry. There is only one binocular stimulus, but it is rivalrous between the two eyes and the brain suppresses the input from one eye.

To understand the sense in which the brain generates an internal model, read these: http://philosophyandpsychology.com/?p=1013
http://en.wikipedia.org/wiki/Bayesian_brain
http://mrl.isr.uc.pt/pub/bscw.cgi/d27540/ReviewKnillPouget2.pdf

The final link is a peer reviewed article by one of the top guys in this subfield.

I see, so by internal model you meant something different from a model of the self. It just means that there is "spontaneous" activity in the brain.
 
  • #45
atyy said:
I see, so by internal model you meant something different from a model of the self. It just means that there is "spontaneous" activity in the brain.

In the context of perception it generally means a model of the external world. For self-awareness and introspection there should also be a model of the self. I don't know why you think the activity would be spontaneous. Under Bayesian theories of perception the activity encodes the probabilities of different stimulus configurations based on the incoming data. This activity is not spontaneous but is very orderly and stimulus-dependent. This is one reason why multistable perception is so useful experimentally - by maximising the ambiguity of the stimulus we can investigate perceptual inference and the neural mechanisms which cause us to settle on a unique interpretation.
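
As a heavily simplified sketch of what "encoding the probabilities of different stimulus configurations" means (my own toy numbers, not from any paper): Bayes' rule over two interpretations of an ambiguous stimulus, say the two readings of a Necker cube.

Code:
import numpy as np

priors = np.array([0.5, 0.5])          # prior over the two interpretations
likelihoods = np.array([0.52, 0.48])   # P(data | interpretation): ambiguous

posterior = priors * likelihoods
posterior /= posterior.sum()           # Bayes' rule (normalisation)
print(posterior)                       # ~[0.52, 0.48]: no clear winner

# Even accumulating ten more equally ambiguous samples barely breaks
# the tie, which is why the system has to settle on one interpretation.
for _ in range(10):
    posterior *= likelihoods
    posterior /= posterior.sum()
print(posterior)                       # ~[0.71, 0.29]

On this picture the neural activity stands in for the posterior, and a perceptual switch corresponds to the estimate jumping from one interpretation to the other.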
 
  • #46
apeiron said:
The neuron itself is just switching rates. What does it know about anything?
The obvious counter to that is, "What does the "larger brain" know about anything?" As madness admitted, he has no idea how consciousness might arise from the Bayesian brain. How does the result of a calculation become aware of itself? The predictive modeling described is deemed to be a good description of what kind of calculation the brain is performing and nothing more. Consciousness isn't explained by it.

apeiron said:
Disrupt that feedback-based balancing act and a part of the brain can generate vivid imagery - suppressed states can become expressed states. And the rest of the brain can try to integrate this chaotic activity into the general web of its activity.
I believe it's much more literally an experience of, consciousness of, the cortex as such:

Though migraine causes great suffering for millions of people, there has been much success, in the last decade or two, in understanding what goes on during attacks, and how to prevent or minimize them. But we still have only a very primitive understanding of what, to my mind, are among the most intriguing phenomena of migraine — the geometric hallucinations it so often evokes. What we can say, in general terms, is that these hallucinations reflect the minute anatomical organization, the cytoarchitecture, of the primary visual cortex, including its columnar structure — and the ways in which the activity of millions of nerve cells organizes itself to produce complex and ever-changing patterns. We can actually see, through such hallucinations, something of the dynamics of a large population of living nerve cells and, in particular, the role of what mathematicians term deterministic chaos in allowing complex patterns of activity to emerge throughout the visual cortex. This activity operates at a basic cellular level, far beneath the level of personal experience.
http://migraine.blogs.nytimes.com/2008/02/13/patterns/

The same would be true of any dedicated area of the cortex that is seizing or experiencing migraine hyperexcitability. The sounds, smells, tactile sensations, tastes that happen during these episodes manifest something elemental about that area of cortex. The cortex isn't modeling the outside world during these episodes, obviously, but it also isn't just producing incoherent noise: it's demonstrating something about itself. The firing of visual neurons in the visual cortex, firing produced by any means whatever, produces a visual experience. Visual neurons fire-->visual experience. Olfactory neurons fire-->olfactory experience. Etc. During these paroxysmal episodes these neurons aren't listening to any top-down or bottom-up input. The thalamus gets entrained into the paroxysm and starts helping to sustain it.

Whatever thing you experienced that you are citing as a migraine aura doesn't concur with what I've read and heard. I don't know what you experienced but the following is typical of the description I usually encounter:

I have had migraines for most of my life; the first attack I remember occurred when I was 3 or 4 years old. I was playing in the garden when a brilliant, shimmering light appeared to my left — dazzlingly bright, almost as bright as the sun. It expanded, becoming an enormous shimmering semicircle stretching from the ground to the sky, with sharp zigzagging borders and brilliant blue and orange colors. Then, behind the brightness, came a blindness, an emptiness in my field of vision, and soon I could see almost nothing on my left side. I was terrified — what was happening? My sight returned to normal in a few minutes, but these were the longest minutes I had ever experienced.
http://migraine.blogs.nytimes.com/2008/02/13/patterns/

Evo's was the same. This is not an experience you have to learn to attend to in order to become conscious of it. Attention to it is automatic and involuntary: it's an extreme experience.

It seems to me the Bayesian calculation is thrown into disarray during a paroxysm: both the reality and the model to compare it to become unavailable for the duration. Yet consciousness is preserved.
 
  • #47
madness said:
In the context of perception it generally means a model of the external world. For self-awareness and introspection there should also be a model of the self. I don't know why you think the activity would be spontaneous. Under Bayesian theories of perception the activity encodes the probabilities of different stimulus configurations based on the incoming data. This activity is not spontaneous but is very orderly and stimulus-dependent. This is one reason why multistable perception is so useful experimentally - by maximising the ambiguity of the stimulus we can investigate perceptual inference and the neural mechanisms which cause us to settle on a unique interpretation.
I didn't mean that spontaneous activity is unpatterned. Fundamentally, if we assume classical physics as a sufficient basis for neural function, there is of course no such thing as spontaneous activity - it depends on a choice of coarse grained variables, such that experimental preparations that are identical at a coarse scale are different on a fine scale. Anyway, the coarse scale is convenient, and described by a probability model. In Knill and Pouget's proposal, the spontaneous activity I am thinking about is similar in spirit to the Poisson noise they mention - but their proposal is more specific.
 
  • #48
vampares said:
I had an English class and this blonde girl with large jowls sat across from me. All I could do was stare.

And not paying attention, I was asked what is the cadence (or something) Shakespeare's sonnets are written in.

I gave a good look into the top of my head and answered iambic pentameter. We still haven't f'd. But I've always wondered where the answer came from.

There are only 26 letters in the alphabet. Nouns and verbs, nouns and verbs. Richard Dietrich.

Ultimately the world hits a rhythm. Dip-switch logic falls into play and people follow the politics.

Besides, animals can communicate with linguistic competency. They tend to follow the common language, as it is as free as you might possess it.
You read a lot of James Ellroy?
 
  • #49
atyy said:
I didn't mean that spontaneous activity is unpatterned. Fundamentally, if we assume classical physics as a sufficient basis for neural function, there is of course no such thing as spontaneous activity - it depends on a choice of coarse grained variables, such that experimental preparations that are identical at a coarse scale are different on a fine scale. Anyway, the coarse scale is convenient, and described by a probability model. In Knill and Pouget's proposal, the spontaneous activity I am thinking about is similar in spirit to the Poisson noise they mention - but their proposal is more specific.

There is certainly a lot of noise in the nervous system - that is, variability in response to a stimulus etc., although there is debate over whether it really is noise. In general, however, spontaneous activity refers to activity which is not related to the onset of a stimulus or some other cognitive task (such as attention, remembering). See this article for an interesting example of spontaneous activity http://www.unicog.org/publications/PNAS-2008-Hesselmann.pdf.

The noise in response to a stimulus is generally considered to be unpatterned, although it can covary with other variables. For population coding models such as Pouget's, the neurons respond with some stimulus-specific firing rate (the tuning curve of the neuron) plus some noise. For a large population the noise can actually increase the information content of the network in some cases. However, it wouldn't exactly be correct to say that the stimulus variables are represented by spontaneous activity alone.
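
For concreteness, here is a toy version of "tuning curve plus noise" (my own sketch, loosely in the spirit of the Pouget-style population coding models mentioned, not code from any paper): Gaussian tuning curves generate Poisson spike counts, and the stimulus is still recovered well by maximum likelihood.

Code:
import numpy as np

rng = np.random.default_rng(1)

# Population with Gaussian tuning curves over a stimulus variable
# (say, orientation in degrees), emitting Poisson spike counts.
preferred = np.linspace(0.0, 180.0, 32)      # preferred orientations
width, peak_rate, window = 20.0, 50.0, 0.5   # tuning width, Hz, seconds

def rates(s):
    return peak_rate * np.exp(-0.5 * ((s - preferred) / width) ** 2)

true_stim = 72.0
counts = rng.poisson(rates(true_stim) * window)   # one noisy population response

# Maximum-likelihood readout: which stimulus best explains the counts?
grid = np.linspace(0.0, 180.0, 721)
log_lik = [np.sum(counts * np.log(rates(s) * window) - rates(s) * window)
           for s in grid]
print(grid[int(np.argmax(log_lik))])  # close to 72.0 despite the Poisson noise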
 
  • #50
madness said:
There is certainly a lot of noise in the nervous system - that is, variability in response to a stimulus etc., although there is debate over whether it really is noise. In general, however, spontaneous activity refers to activity which is not related to the onset of a stimulus or some other cognitive task (such as attention, remembering). See this article for an interesting example of spontaneous activity http://www.unicog.org/publications/PNAS-2008-Hesselmann.pdf.

The noise in response to a stimulus is generally considered to be unpatterned, although it can covary with other variables. For population coding models such as Pouget's, the neurons respond with some stimulus-specific firing rate (the tuning curve of the neuron) plus some noise. For a large population the noise can actually increase the information content of the network in some cases. However, it wouldn't exactly be correct to say that the stimulus variables are represented by spontaneous activity alone.

Thanks for the Hesselmann reference. It's terrific!

Do you have any more recommendations for reading work of similar quality about spontaneous activity? Just glancing at Hesselmann's references, it looks like (4) and (5) are in the same spirit.
 