Consciousness and the Attention Schema Theory - Meaningful?

  • #1
Graeme M
I am currently reading Jesse Prinz's "The Conscious Brain". I am only about 75 pages into it so haven't yet covered enough ground to fully understand what he's proposing, but his idea is termed the Attended Intermediate-level Representation theory.

Prinz notes that evidence to date shows that most sensory processing is probably organised into a tripartite hierarchy - lower, intermediate and higher levels. He argues that consciousness is formed at the intermediate level - the lower level is primary brute processing while the higher level is concerned with abstraction and categorisation.

The intermediate level is where the more nuanced marriage of both produces the experiential phenomenon we term consciousness via the process of attention. Prinz further seems to be saying that all experiential phenomena arise from sensory processing, including emotional experiences, which he sees as the intermediate evaluation of internal bodily responses.

Although I haven't gotten far into the book, I get the feeling that for all the strictly physical evidence he has assembled Prinz appears still to be arguing for the idea that a conscious experience somehow 'arises' from the neural processing of information. I may be wrong there as I haven't yet reached the exposition of his theory in detail.

By contrast I recently read the book "Consciousness and the Social Brain" by Professor Michael Graziano. He too argues for attention as the mediating agent, but he takes a rather different tack. He suggests that the brain constructs a model of the internal process of generating attention, a description if you will of what is being attended. This model is awareness, and it can then be attached to the objects of attention - in a qualitative sense, the subjective experience is simply the constructed model of perceptual object and attentive processing. As Graziano describes it, consciousness is "a schematic model of one's state of attention".

Graziano coins the term "Attention Schema Theory" for his idea, and suggests that awareness (his attentional model) arises in the superior temporal sulcus and the temporoparietal junction.

A paper discussing his theory can be found here:
http://journal.frontiersin.org/article/10.3389/fpsyg.2015.00500/abstract

This latter theory strikes an intuitive chord for me. Consciousness is what it feels like for the brain to continuously construct a model of attention - a model that changes moment by moment and which correlates a range of perceptual data and unconscious processing into a directive process for managing the organism's behaviour.

Now, this is mostly beyond my pay grade, but it strikes me that here we have two somewhat complementary theories. If intermediate level representation is the what of consciousness and the attention schema is the how of consciousness, do these two theories therefore dovetail to some extent and point the way to an explanatory physical account of consciousness?

Or is Graziano off-beam?
 
  • #2
Of course, this is all still in the realm of speculation. It is productive speculation that can lead to falsifiable claims (obviously the details have evidence behind them, but how we synthesize those details into a testable claim on the larger scale is still being figured out).

That being said, I don't understand what is insightful about associating attention with consciousness. It's sensible, but what does it really tell us, physically, about how the subjective experience arises in the first place? How would it help us test a robot or invertebrate to see if it is conscious? Certainly we can observe behavior in a robot that looks like attention, but we still haven't really ascertained whether or not the robot is experiencing anything.
 
  • #3
Graziano explicitly claims to attack the "hard problem of consciousness", as defined by Chalmers, and then clearly goes on to propose a solution to some of Chalmers' so-called easy problems instead. His theory is interesting, but it's not what he claims it to be.

In terms of the lower, intermediate and higher levels of processing, current findings are really complicating that picture, to the point at which it is becoming questionable. Most of the research on which this was based was on anaesthetised animals. From studies of awake behaving animals, it is becoming clear that even primary sensory cortices are often dominated by higher non-sensory processes.
 
  • #4
I suppose Prinz's explanation makes more sense in this case. Even current supercomputer simulations can account for only about 1% of the brain's functionality. I would speculate that the brain's consciousness is generated from the inner connections of the neurons. While ignoring all sensory inputs your consciousness still exists and you can't cease to exist simply because you do not think of something. As to where consciousness arises, that would take calculations on neuronal firing and observation of the electrical and chemical synapses inside the body.

If you ask me what the brain stem is for, it would be for regulation such as heartbeat and breathing, and if you ask me how they figured this out, it was probably from observation with brain imaging. To be more precise, though, it would be interesting to see how the brain stem's neuronal structure is able to regulate the heart and breathing. After all, the brain is constructed of neurons, and I speculate that the connections between the neurons are what make it a biological computer capable of regulating heartbeat and breathing. The structure of one person's brain should resemble that of another even if they have different memories; I would speculate that certain structures and wirings of the brain are the same, with slight variations.

People are currently working on whole-brain simulations such as the Blue Brain Project, but I am not sure of the accuracy of such simulations or whether they could be used to study the idea of consciousness. As for Professor Graziano's speculation on awareness, it could be correct that the anatomical structure of the superior temporal sulcus and the temporoparietal junction governs consciousness, but the only way to verify that is through simulation on a supercomputer or through brain imaging. Even if you can see the firing of neuronal signals, it would still take algorithms to figure out their functional behavior.

It is easier for a person to tell you what something feels like: you see the color green, you see how the neuronal signal fires on the retina, and you create an algorithm that maps such neuronal signals to the color green. Otherwise it is like looking at a computer without looking at the screen and trying to guess what it is doing. I am also interested in this topic, but there is still quite a lot of speculation going on. This is probably a good place to start: http://www.gizmag.com/neural-3d-imaging-brain/32169/
 
  • #5
Lots of things contribute to breathing regulation. Network connectivity does play a role. There is thought to be an intrinsic rhythm generator (though it could also be that rhythm generation is a network-level phenomenon). Either way, there appear to be chemosensors that detect the level of CO2 (for instance) and upregulate breathing to correct hypercapnia. There are a lot of other similar inputs to the rhythm generator that regulate based on organism state.
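The closed loop described here (a chemosensor detects elevated CO2, breathing is upregulated, faster breathing clears CO2) can be sketched as a simple negative-feedback simulation. This is purely illustrative; all constants are made up, not physiological values:

```python
# Toy negative-feedback loop for CO2-driven breathing regulation.
# All constants are illustrative, not physiological values.

def simulate(steps=600, dt=0.1):
    co2 = 60.0        # arterial CO2 (arbitrary units), starting hypercapnic
    setpoint = 40.0   # level the "chemosensor" is tuned to
    history = []
    for _ in range(steps):
        error = co2 - setpoint            # chemosensor signal
        rate = 12.0 + 0.5 * error        # breathing rate upregulated when CO2 is high
        co2 += (2.0 - rate / 6.0) * dt   # metabolic production minus clearance
        history.append(co2)
    return history

levels = simulate()
# CO2 decays smoothly from 60 back toward the setpoint of 40.
```

The loop is stable because clearance grows with the error: the further CO2 sits above the setpoint, the harder the system breathes, so the state relaxes back to equilibrium rather than oscillating.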
 
  • #6
fredreload said:
While ignoring all sensory inputs your consciousness still exists and you can't cease to exist simply because you do not think of something.

Hmmm, I'm not sure that this is a testable hypothesis.
 
  • #7
madness, that's very interesting about current research. I haven't finished Prinz's book so I am not sure exactly how he categorises the levels of hierarchy but on the face of it I felt that the levels side of things was a bit sketchy in that most of this idea is based on findings regarding the visual processing system. He proposes that on evidence at the time of writing it seemed likely that the same processing arrangement was the case for other systems such as auditory, gustation, emotional response and so on.

That said, how do new findings negate the broad idea he has proposed - that consciousness arises at an intermediate stage in processing? I am speaking from relative ignorance but substantial curiosity here so might be misunderstanding, but in terms of a hierarchy of processing, isn't Prinz largely referring to a logical architecture rather than a physical architecture (although he does make claims regarding physical locations)? Isn't the intermediate representation theory then still valid whether or not the higher level processes also occur in primary cortices?

In terms of the hard problem, I don't mean to stray into philosophy but I am still somewhat uncertain exactly why it is hard. Chalmers argues that awareness, or conscious experience, cannot easily be explained by physical processes. How does a physical process give rise to some non-physical thing like awareness? I don't see why awareness is considered non-physical. I guess I need someone to explain to me why this is so challenging.

For example, if visual cognition is relatively well explained, then surely a physical description of how a visual representation of the external object is formed by neuronal arrangements is sufficient? That is, if we know how a visual representation is formed and processed, then by extension the internal experience of that just is what it is to have that representation? If the neural arrangement for a given representation changes in response to some change in stimulus, and the subject reports a change in conscious experience, then what else need there be?

This leads to Graziano's hypothesis. He states that he uses the word attention in a neuroscientific sense. This is (as I understand it) that various internal signals are generated in response to stimulus and when a particular signal is strong enough it is attended to by further higher-order processes. So attention is a process the brain employs to select which of the many signals bubbling away it should actually use in directing the organism's behaviour. But attention is both a top-down and bottom-up process.
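As a toy illustration of this kind of selection process (my own sketch, not a model drawn from Graziano or Prinz), one can combine bottom-up salience with top-down gain and let a threshold decide which signals win out:

```python
# Toy sketch of attention as signal selection: bottom-up salience is
# modulated by top-down gain, and only signals that cross a threshold
# become available for further processing. Purely illustrative; the
# signal names, gains, and threshold are invented for the example.

def select_for_attention(signals, goals, threshold=1.0):
    """signals: {name: bottom-up salience}; goals: {name: top-down gain}."""
    attended = {}
    for name, salience in signals.items():
        gain = goals.get(name, 1.0)      # top-down bias, neutral by default
        strength = salience * gain
        if strength >= threshold:
            attended[name] = strength
    # Strongest surviving signal first
    return sorted(attended, key=attended.get, reverse=True)

signals = {"looming shadow": 0.9, "hum of fridge": 0.3, "own name spoken": 0.5}
goals = {"own name spoken": 3.0, "looming shadow": 1.5}

print(select_for_attention(signals, goals))
# "own name spoken" (0.5 * 3.0 = 1.5) and "looming shadow" (0.9 * 1.5 = 1.35)
# cross the threshold; the fridge hum (0.3) is filtered out before it
# ever reaches further processing.
```

The point of the sketch is only that the same weak stimulus can win or lose depending on top-down gain, which is the bottom-up/top-down interplay described above.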

I understand Graziano to mean that awareness is a model of the process of attention. That is, it's an abstraction of the many emergent signals placed into a coherent whole. The object of this is to provide some degree of top-down determination of behaviour. It seems to me that if all the brain does is to respond to whatever signals gain priority without a cohesive unity then behaviour will be rigid and less likely to lead to an adaptive advantage.

The more that the organism can flexibly utilise the objects of attention and combine multiple potentials into a single directed stream of behaviour the more likely it should be that the organism can react proactively. That must confer a selective advantage, and thus evolution might favour the development of such processes.

That's what I mean by his hypothesis making intuitive sense to me. But I freely admit to only the most basic understanding of this stuff.

Pythagorean, it's not attention per se that is the distinguishing feature of Graziano's hypothesis. It is the model of attention that is critical. Prinz is arguing that attention mediates the available range of representations into a set for conscious experience. Graziano suggests that modelling that process presents the experience itself.

Consciousness is the abstracted representation of the attentional process which is itself then available for further processing. His thinking seems to be that this would evolve in order to permit assessment of attentional properties in other environmental agents, for example, what another person, or a predator, is attending to and how that might play out in behaviour.
 
  • #8
Pythagorean said:
Lots of things contribute to breathing regulation. Network connectivity does play a role. There is thought to be an intrinsic rhythm generator (but it could also be that rhythm generation is a network level phenomena). Either way, there appears to be chemosensors that detect the level of CO2 (for instance) and upregulate breathing to correct hypercapnia. There's a lot of other similar inputs to the rhythm generator that regulate based on organism state.
That's an interesting concept, a friend once mentioned that the neuronal structure resembles that of a transistor, maybe by understanding the individual neuronal functionality along with the delay on electrical and chemical synapse would give a better idea on how the brain works.
 
  • #9
Graeme M said:
madness, that's very interesting about current research. I haven't finished Prinz's book so I am not sure exactly how he categorises the levels of hierarchy but on the face of it I felt that the levels side of things was a bit sketchy in that most of this idea is based on findings regarding the visual processing system. He proposes that on evidence at the time of writing it seemed likely that the same processing arrangement was the case for other systems such as auditory, gustation, emotional response and so on.

That said, how do new findings negate the broad idea he has proposed - that consciousness arises at an intermediate stage in processing? I am speaking from relative ignorance but substantial curiosity here so might be misunderstanding, but in terms of a hierarchy of processing, isn't Prinz largely referring to a logical architecture rather than a physical architecture (although he does make claims regarding physical locations)? Isn't the intermediate representation theory then still valid whether or not the higher level processes also occur in primary cortices?

I have to admit I don't know Prinz's theory well. What I was referring to was a host of recent experimental results that muddy the traditional distinction between low-level sensory, intermediate-level association, and high level cognitive brain areas. During active behaviour, sensory areas convey information about a load of nonsensory factors such as reward (http://www.cell.com/neuron/abstract/S0896-6273(15)00476-6), mismatch between expected and observed sensory signal (http://www.ncbi.nlm.nih.gov/pubmed/22681686), the physical movement of the animal (http://www.ncbi.nlm.nih.gov/pubmed/20188652) etc.

If I understand you right, you now ask whether we can shift from an anatomically localised view of low, intermediate and high-level functions to a functional view. To me, the current evidence points to a picture in which processing isn't separated into these levels at all, but rather it is all happening together in the same circuits somehow.

Graeme M said:
In terms of the hard problem, I don't mean to stray into philosophy but I am still somewhat uncertain exactly why it is hard. Chalmers argues that awareness, or conscious experience, cannot easily be explained by physical processes. How does a physical process give rise to some non-physical thing like awareness? I don't see why awareness is considered non-physical. I guess I need someone to explain to me why this is so challenging.

You're not alone in feeling that way. It's an opinion-dividing issue. Chalmers puts his case forward here: http://consc.net/papers/facing.pdf. A major problem in deciding whether consciousness is physical is first having a good definition of what it means for something to be physical or not, in my opinion at least.

Graeme M said:
For example, if visual cognition is relatively well explained, then surely a physical description of how a visual representation of the external object is formed by neuronal arrangements is sufficient? That is, if we know how a visual representation is formed and processed, then by extension the internal experience of that just is what it is to have that representation? If the neural arrangement for a given representation changes in response to some change in stimulus, and the subject reports a change in conscious experience, then what else need there be?

The hard problem asks, "why does it feel like something to have a visual representation?", and also "why does it feel like this to have this representation and not some other way?". Another way of phrasing it is that there is a "something it is like" https://en.wikipedia.org/wiki/What_Is_it_Like_to_Be_a_Bat?. You can have a look at the inverted spectrum (https://en.wikipedia.org/wiki/Inverted_spectrum) and p-zombie (https://en.wikipedia.org/wiki/Philosophical_zombie) thought experiments to get a better idea.

In any case, there are people who take a view somewhat like yours, and people who take the opposite view.
Graeme M said:
This leads to Graziano's hypothesis. He states that he uses the word attention in a neuroscientific sense. This is (as I understand it) that various internal signals are generated in response to stimulus and when a particular signal is strong enough it is attended to by further higher-order processes. So attention is a process the brain employs to select which of the many signals bubbling away it should actually use in directing the organism's behaviour. But attention is both a top-down and bottom-up process.

I understand Graziano to mean that awareness is a model of the process of attention. That is, it's an abstraction of the many emergent signals placed into a coherent whole. The object of this is to provide some degree of top-down determination of behaviour. It seems to me that if all the brain does is to respond to whatever signals gain priority without a cohesive unity then behaviour will be rigid and less likely to lead to an adaptive advantage.

My point was simply that Graziano addresses what Chalmers refers to as the easy problems, while explicitly stating that he is addressing what Chalmers calls the hard problem. Note that Chalmers defines "awareness" as an easy problem not a hard problem. Here are the easy problems outlined by Chalmers in the paper I linked to above:

• the ability to discriminate, categorize, and react to environmental stimuli;
• the integration of information by a cognitive system;
• the reportability of mental states;
• the ability of a system to access its own internal states;
• the focus of attention;
• the deliberate control of behavior;
• the difference between wakefulness and sleep.

In particular, the ability to access and report internal states and focus attention are what Graziano attempts to address.
 
  • #10
madness said:
In terms of the lower, intermediate and higher levels of processing, current findings are really complicating that picture, to the point at which is becoming questionable. Most of the research on which this was based was on anaesthetised animals. From studies of awake behaving animals, it is becoming clear that even primary sensory cortices are often dominated by higher non-sensory processes.

madness said:
What I was referring to was a host of recent experimental results that muddy the traditional distinction between low-level sensory, intermediate-level association, and high level cognitive brain areas. During active behaviour, sensory areas convey information about a load of nonsensory factors such as reward (http://www.cell.com/neuron/abstract/S0896-6273(15)00476-6), mismatch between expected and observed sensory signal (http://www.ncbi.nlm.nih.gov/pubmed/22681686), the physical movement of the animal (http://www.ncbi.nlm.nih.gov/pubmed/20188652) etc.

There is a long history, including work done in anesthetized animals, that is well aware of the effects of "attention" and "reward" in the primary sensory cortices. For example, http://www.ncbi.nlm.nih.gov/pubmed/8855336 and http://www.ncbi.nlm.nih.gov/pubmed/16672673. Thus these effects are neither recent discoveries nor indicated only by work in awake behaving animals.

With respect to the effect of movement or other non-sensory factors in the sensory cortices, there are also earlier results like http://www.ncbi.nlm.nih.gov/pubmed/12495520 and http://www.ncbi.nlm.nih.gov/pubmed/12612021 and http://www.ncbi.nlm.nih.gov/pubmed/16033889 and http://www.ncbi.nlm.nih.gov/pubmed/14583754.

Although any "traditional distinction" was muddied long ago, there is still a rough cortical hierarchy, borne out by old and recent results such as http://www.ncbi.nlm.nih.gov/pubmed/25383900.
 
  • #11
One may imagine a computer's being fitted with optical sensing equipment, and one may imagine the computer's "attention" being focused on the optical information it is processing--and in one sense it would make sense to say that the computer need not be conscious, while in a second sense it would make no sense at all to say that the computer need not be conscious. One may mean only that the computer's information-processing is prioritizing the processing of optical information over the processing of other information, in which case the computer's having its "attention" focused would not imply its being conscious. Or one may mean that the computer's awareness is focused on what it sees, in which case the computer's having its "attention" focused would imply its being conscious. It seems to me that attempts to explain consciousness always address the first meaning--what Chalmers calls "the easy problem(s) of consciousness," just as madness said--but that they never address the second meaning--what Chalmers calls "the hard problem of consciousness." This is not a shortcoming of those attempts, taken as attempts to say as much as can scientifically be said about how consciousness arises from brain function in terms of objective observables; but those who make such attempts should not be taken to be addressing the hard problem of consciousness, and they should not advertise themselves as so doing.

I do find it fascinating that there are some people who do not seem to see Chalmers's hard problem as a problem at all. It's sometimes tempting to suppose that some people are mindless robots (and consequently do not understand the hard problem, as they lack introspective capability) but that others are enminded (and consequently do understand the hard problem). I'll assume that we're all enminded, though <smile>. But I should think that the contrast between a rock, which presumably has no awareness of anything at all, and an awake human being, who has awareness even when he sits still, in a silent room, with his eyes closed, ought to be self-evident. Understanding how the brain processes information is one thing; understanding how it is that that processing of information doesn't take place mindlessly, with no awareness or mental states whatsoever, but instead takes place in such a way that, for example, pain matters, is another matter. How is it that human beings are not merely mindless robots mindlessly processing information--perhaps prioritizing some information over other information, but still processing it all mindlessly?
 
  • #12
MindWalk said:
I do find it fascinating that there are some people who do not seem to see Chalmers's hard problem as a problem at all. It's sometimes tempting to suppose that some people are mindless robots (and consequently do not understand the hard problem, as they lack introspective capability) but that others are enminded (and consequently do understand the hard problem). I'll assume that we're all enminded, though <smile>. But I should think that the contrast between a rock, which presumably has no awareness of anything at all, and an awake human being, who has awareness even when he sits still, in a silent room, with his eyes closed, ought to be self-evident. Understanding how the brain processes information is one thing; understanding how it is that that processing of information doesn't take place mindlessly, with no awareness or mental states whatsoever, but instead takes place in such a way that, for example, pain matters, is another matter. How is it that human beings are not merely mindless robots mindlessly processing information--perhaps prioritizing some information over other information, but still processing it all mindlessly?

I often am uncertain whether I am conscious or "enminded", a nice term I have just learnt. Could consciousness be just a fancy form of object recognition, ie. the recognition of an object called "self" (and maybe "world")?

I admit that the times I feel conscious are when I think about the "hard problem" :oldgrumpy:
 
  • #13
atyy said:
I often am uncertain whether I am conscious or "enminded", a nice term I have just learnt. Could consciousness be just a fancy form of object recognition, ie. the recognition of an object called "self" (and maybe "world")?

Isn't that just what Graziano is suggesting? That awareness is a model of attentional process - in effect, awareness of experience is a property that the brain attaches to the objects of perception.

Mindwalk, when Graziano (and maybe Prinz too but I haven't read far enough yet) talk of attention, they seem to me to be talking of something other than what we colloquially mean by attention. If I am misunderstanding please correct me here, but Graziano at least doesn't mean by attention the idea that we focus on a particular thing at a subjective level. He is talking about a neural process by which signals become attended to. I thought that was an accepted model for how neural processing unfolds - that when a particular signal rises above the background noise of random and perceptual signals it becomes available for further processing. I assume that's sort of what Schurger is referring to in his explanation of the Libet data ( http://www.pnas.org/content/109/42/E2904.full )

I am one of those who doesn't follow why the hard problem is seen as hard, but I'll have to read the reference above from madness in which Chalmers' idea is outlined to see if I can get a better feel for this subject.

By the way, for anyone interested in Graziano's theory who doesn't wish to read the formal papers, here's a nice summary: [Edit by moderator: deleted inappropriate source.]
 
  • #14
MindWalk said:
One may imagine a computer's being fitted with optical sensing equipment, and one may imagine the computer's "attention" being focused on the optical information it is processing--and in one sense it would make sense to say that the computer need not be conscious, while in a second sense it would make no sense at all to say that the computer need not be conscious. One may mean only that the computer's information-processing is prioritizing the processing of optical information over the processing of other information, in which case the computer's having its "attention" focused would not imply its being conscious. Or one may mean that the computer's awareness is focused on what it sees, in which case the computer's having its "attention" focused would imply its being conscious. It seems to me that attempts to explain consciousness always address the first meaning--what Chalmers calls "the easy problem(s) of consciousness," just as madness said--but that they never address the second meaning--what Chalmers calls "the hard problem of consciousness." This is not a shortcoming of those attempts, taken as attempts to say as much as can scientifically be said about how consciousness arises from brain function in terms of objective observables; but those who make such attempts should not be taken to be addressing the hard problem of consciousness, and they should not advertise themselves as so doing.

I do find it fascinating that there are some people who do not seem to see Chalmers's hard problem as a problem at all. It's sometimes tempting to suppose that some people are mindless robots (and consequently do not understand the hard problem, as they lack introspective capability) but that others are enminded (and consequently do understand the hard problem). I'll assume that we're all enminded, though <smile>. But I should think that the contrast between a rock, which presumably has no awareness of anything at all, and an awake human being, who has awareness even when he sits still, in a silent room, with his eyes closed, ought to be self-evident. Understanding how the brain processes information is one thing; understanding how it is that that processing of information doesn't take place mindlessly, with no awareness or mental states whatsoever, but instead takes place in such a way that, for example, pain matters, is another matter. How is it that human beings are not merely mindless robots mindlessly processing information--perhaps prioritizing some information over other information, but still processing it all mindlessly?
If we are going to continue this thread we need to get away from philosophy and stick to the science. Thanks. Remember in order to be acceptable, the information must be published in an acceptable peer reviewed journal.
 
  • #15
Evo said:
If we are going to continue this thread we need to get away from philosophy and stick to the science. Thanks. Remember in order to be acceptable, the information must be published in an acceptable peer reviewed journal.

The point of the hard problem was mentioned by madness in post #3. Do you think madness was going off topic there?
 
  • #16
Graeme M said:
Isn't that just what Graziano is suggesting? That awareness is a model of attentional process - in effect, awareness of experience is a property that the brain attaches to the objects of perception.

On a quick first read, Graziano's proposal does seem to be along those lines. There is some history to this sort of thing. You can try googling "efference copy and consciousness". Efference copy is a kind of internal feedback in models of motor control, and a system that uses efference copy often has something which can very loosely be thought of as a "model of the self", e.g. the book by Churchland referred to by http://letstalkbooksandpolitics.blogspot.sg/2014/02/the-self-as-brain-efferent-copy-voices.html or Owen Holland's presentation http://slideplayer.com/slide/793890/.

But are these enough, or do they miss the point of the "hard problem"?
 
  • #17
Imo, falsifiability and the hard problem lie in designing an experiment that can test whether certain systems are conscious (such as insects, amphibians, or robots). So the focus now is on designing that experiment (if such an experiment can be designed). Tononi's theory has had some success with humans, but it lacks any external validity. Other than that, we are collecting data on the easy problems as they relate to the hard problem to get a more complete picture.

So we have Tononi's Integrated Information Theory, Varela's Brainweb, Friston's Free Energy Principle, Koch and Crick's framework, and many more that are discussed here. But the thing they all have in common is their relation to the hard problem, and it often requires careful epistemology to describe the limitations of each approach in the context of the hard problem.
 
  • #18
Pythagorean said:
Imo, falsifiability and the hard problem lie in designing an experiment that can test whether certain systems are conscious (such as insects, amphibians, or robots).

So the focus now is on designing that experiment (if such an experiment can be designed).

Maybe it's like an experiment to distinguish between liquid and gas :)

Pythagorean said:
Tononi's theory has had some success with humans, but it lacks any external validity.

Have you seen http://www.scottaaronson.com/blog/?p=1799 and http://www.scottaaronson.com/blog/?p=1823 ?
 
  • #19
No, I hadn't seen that. I've made it through the second one now, and will have to go back for the first sometime; there's a lot of information in there. There are a lot of general ideas and points in there that I agree with (even when the author is not talking about Tononi's work).
 
  • #20
atyy said:
There is a long history, including work done in anesthetized animals, that is well aware of the effects of "attention" and "reward" in the primary sensory cortices. For example, http://www.ncbi.nlm.nih.gov/pubmed/8855336 and http://www.ncbi.nlm.nih.gov/pubmed/16672673. Thus these indications are neither exclusively recent nor indicated only by work in awake behaving animals.

With respect to the effect of movement or other non-sensory factors in the sensory cortices, there are also earlier results like http://www.ncbi.nlm.nih.gov/pubmed/12495520 and http://www.ncbi.nlm.nih.gov/pubmed/12612021 and http://www.ncbi.nlm.nih.gov/pubmed/16033889 and http://www.ncbi.nlm.nih.gov/pubmed/14583754.

Although any "traditional distinction" was muddied long ago, there is still a rough cortical hierarchy, borne out by old and recent results such as http://www.ncbi.nlm.nih.gov/pubmed/25383900.

I more or less agree. I was careful not to make any excessively strong statements about this. In any case, there has really been an acceleration of research in this direction very recently due to the ability to record in head-fixed mice in virtual reality environments.

The processing hierarchy is really not clear in my opinion. There are 10 times more projections from cortex to thalamus than thalamus to cortex, for example (http://www.ncbi.nlm.nih.gov/pubmed/12626002). In terms of information flow, there is a growing evidence for the predictive coding hypothesis (http://www.ncbi.nlm.nih.gov/pubmed/10195184), which turns the traditional processing hierarchy on its head. This recent paper found that the activity in V1 is dominated by top-down inputs in a well-learned visual task, but bottom-up inputs in an unfamiliar task (http://www.nature.com/neuro/journal/v18/n8/abs/nn.4061.html).
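The predictive coding idea mentioned here can be caricatured in a few lines: a higher level sends down a prediction, the lower level passes up only the prediction error, and the higher-level estimate is nudged to reduce that error. This is a toy linear sketch for illustration only, not the actual Rao and Ballard model; the learning rate and input value are invented.

```python
def predictive_coding_step(estimate, sensory_input, lr=0.1):
    """One toy update: top-down prediction, bottom-up error signal,
    and a correction of the higher-level estimate."""
    prediction = estimate                # top-down signal
    error = sensory_input - prediction   # what the lower level passes up
    return estimate + lr * error         # higher level absorbs the error

estimate = 0.0
for _ in range(50):
    estimate = predictive_coding_step(estimate, sensory_input=1.0)
# the estimate converges toward the input; the error shrinks each step
print(abs(1.0 - estimate) < 0.01)  # → True
```

The point of the caricature is that, once the internal model predicts well, the bottom-up traffic (the error) approaches zero, which is one way of reading the top-down dominance in well-learned tasks.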
 
  • #21
madness said:
The processing hierarchy is really not clear in my opinion. There are 10 times more projections from cortex to thalamus than thalamus to cortex, for example (http://www.ncbi.nlm.nih.gov/pubmed/12626002). In terms of information flow, there is a growing evidence for the predictive coding hypothesis (http://www.ncbi.nlm.nih.gov/pubmed/10195184), which turns the traditional processing hierarchy on its head.

It depends on what one means by "hierarchy". It's quite possibly an abuse of the term, but "hierarchy" is often used in a way that includes consideration of the feedback connections, eg. http://www.ncbi.nlm.nih.gov/pubmed/9373019. Another example is the predictive coding paper by Rao and Ballard http://www.ncbi.nlm.nih.gov/pubmed/10195184 that you cite, which uses the term "hierarchy" to describe its idea. Ballard's new book https://www.amazon.com/dp/0262028611/?tag=pfamazon01-20 is "Brain Computation as Hierarchical Abstraction" :)

Incidentally, the question of what the feedback connections are doing is also a problem in areas where the hierarchical idea is accepted without dispute, eg. http://www.ncbi.nlm.nih.gov/pubmed/25994703. Attentional effects in the cochlea? What?!

It's somewhat out of date now, but in artificial neural networks that are pretrained as deep belief networks, the system is hierarchical, and the feedback connections are needed in the pretraining stage, but when the network is finally trained, it is run in a feedforward way. So an interesting idea is that the feedback connections play a greater role during learning, and a lesser role after learning. Although the learning dynamics are presumably different from those of their artificial counterparts, there is data indicating that this is also the case in the central auditory system: http://www.ncbi.nlm.nih.gov/pubmed/20037578.
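The run-time picture described above can be sketched schematically: assume layer-wise pretraining (which uses both feedforward and feedback weights) has already produced the feedforward weights, and inference is then a single bottom-up sweep in which the feedback connections play no role. All weights and the sigmoid choice here are invented for illustration.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical feedforward weights left over after pretraining;
# the feedback weights used during pretraining have been discarded.
W1 = [[0.8, -0.2], [0.3, 0.5]]   # layer 1 (two units, two inputs)
W2 = [0.6, -0.4]                 # layer 2 (one output unit)

def feedforward(x):
    """After training, inference is a purely bottom-up sweep."""
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return sigmoid(sum(w * hi for w, hi in zip(W2, h)))

y = feedforward([1.0, 0.0])
print(0.0 < y < 1.0)  # → True
```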


madness said:
This recent paper found that the activity in V1 is dominated by top-down inputs in a well-learned visual task, but bottom-up inputs in an unfamiliar task (http://www.nature.com/neuro/journal/v18/n8/abs/nn.4061.html).

Thanks, I hadn't seen that.
 
  • #22
atyy said:
It depends on what one means by "hierarchy". It's quite possibly an abuse of the term, but "hierarchy" is often used in a way that includes consideration of the feedback connections, eg. http://www.ncbi.nlm.nih.gov/pubmed/9373019. Another example is the predictive coding paper by Rao and Ballard http://www.ncbi.nlm.nih.gov/pubmed/10195184 that you cite, which uses the term "hierarchy" to describe its idea. Ballard's new book https://www.amazon.com/dp/0262028611/?tag=pfamazon01-20 is "Brain Computation as Hierarchical Abstraction" :)


I think the fact that there is a structural hierarchy, and that there is a functional hierarchy when stimulating anaesthetised animals, shows that some hierarchy does exist in the brain. The nature of this hierarchy in terms of information processing during awake behaviour is entirely unclear, however.

atyy said:
It's somewhat out of date now, but in artificial neural networks that are pretrained as deep belief networks, the system is hierarchical, and the feedback connections are needed in the pretraining stage, but when the network is finally trained, it is run in a feedforward way. So an interesting idea is that the feedback connections play a greater role during learning, and a lesser role after learning. Although the learning dynamics are presumably different from those of their artificial counterparts, there is data indicating that this is also the case in the central auditory system: http://www.ncbi.nlm.nih.gov/pubmed/20037578.

Isn't that the opposite of what the recent paper I linked to found? They argued that top-down connections provide information from an internal model. In naive animals, they found that V1 responses were dominated by bottom-up sensory input, and in well-trained animals the responses were dominated by top-down signals reflecting an internal model. In other words, naive animals use a traditional bottom-up processing hierarchy, but after learning animals use a predictive coding scheme based on a top-down processing hierarchy.
 
  • #23
Thanks for the many references, I have much reading to do.

An issue here for me is the definition of consciousness in a scientific sense. Some of the discussion so far talks of functions and processes which are not necessarily pointing us at any definition of consciousness, or so it seems to me. But again all I have to go on is my own thinking about this and little actual knowledge.

Let me explain what I am thinking and if anyone can correct me where needed that would help a lot.

If by consciousness we mean awareness as colloquially understood, then the how of cognitive function is probably sufficient. But if by consciousness we mean some kind of marriage of awareness, prior experience and directive actions, the functional description becomes a contributor to an explanation rather than the explanation.

I mean by this that a novel experience should require a different kind of internal process to a learned experience - I must carefully and 'consciously' attend behaviour when learning but once a behaviour is habitualised often the only conscious act is initiating the behaviour. That seems reflective of the kinds of observations madness's references point out. In other words, awareness is sufficient for habits or 'unconscious' behaviours (in a monitoring sense I mean) whereas consciousness is needed to deal with new situations or to learn new behaviours.

From what I gather of that paper linked to earlier, Chalmers in talking of the hard and easy problems seems to suggest that both awareness and conscious direction in my points above are the easy problems. The experience itself is the hard problem.

Scott Aaronson in the blog linked above touches on this when he notes there is no agreed "independent notion of consciousness against which the new notion can be compared". He offers what he calls paradigm-cases that point towards what we mean by consciousness but I think these are rather lacking in substance. Nonetheless he highlights the problem of knowing what it is we are trying to uncover.

Prinz talks of an 'intermediate level' of representation, but is he strictly talking of a functional arrangement? I can see from discussion here that there has been a traditional physical paradigm that suggests a hierarchy, but is Prinz's idea invalidated by new findings that blur those traditional ideas?

For example, the bottom-up or top-down processing in the paper madness references points to a physical implementation, but what is the logical model of the system behaviour?

Graziano, while he does talk of physical locations, suggests consciousness is that which it feels like to model these functional processes and then attach that model to the objects of attention. That would suggest some kind of separate but standard kind of process that could be observed in all functional activities wouldn't it?

By that I mean that while we have bottom-up or top-down or whatever processing of signals in cognitive function, there should be some other 'standard' operation elsewhere (even if 'elsewhere' is widely distributed) that attends such functional processing if Graziano is right. The evidence for his theory would be a separate 'model construction' process. He offers a location, so wouldn't a test for that be to observe a process or neural arrangement that arises in those locations synchronous to other functional processes?

Regardless, if actually experiencing the world is the hard problem, then I agree that this theory doesn't seem to explain the hard problem. Even if there is a model of attention attached to other representations, how does that tell us what it is to have an internal sense of that model?

My own personal intuition, and I see that it is a naïve one, is that when whatever happens internally happens, it just feels like that. There isn't anything to explain. Consider gravity or magnetism. We can explain the forces and make predictions of behaviours, but do we have any explanation of what they feel like? What does it feel like to be attracted to the surface of the earth? We know there is such a feeling but we don't need a physical explanation of what the feeling is in order to have a perfectly workable theory of gravity or electromagnetic forces.
 
  • #24
madness said:
Isn't that the opposite of what the recent paper I linked to found? They argued that top-down connections provide information from an internal model. In naive animals, they found that V1 responses were dominated by bottom-up sensory input, and in well-trained animals the responses were dominated by top-down signals reflecting an internal model. In other words, naive animals use a traditional bottom-up processing hierarchy, but after learning animals use a predictive coding scheme based on a top-down processing hierarchy.

Yes, it seems the opposite. I'm pretty sure there are top-down phenomena similar to what you've been talking about. This comes from a consideration of the hierarchy. Even before the recent mouse work, it was uncontested that there are attentional (task-dependent) effects in MT and V4, and some evidence that attentional effects occur even in V1. Task-dependence is simply a sort of sensory processing, since the task must be indicated to the animal by a sensory stimulus at the start of the trial. Since the sensory information comes at the start of the trial, but influences responses later, it indicates a long time scale or working memory, which from the point of view of the hierarchy is more closely associated with the "top". So there is a good, but not watertight, argument that the attentional effects in sensory areas like V1, MT, V4 are "top-down". I'm not sure how the paper I mentioned fits in, though there has long been evidence for interaction between attention and learning, eg. http://www.ncbi.nlm.nih.gov/pubmed/20060771 and some speculation on how it may work at the circuit level, eg. http://www.ncbi.nlm.nih.gov/pubmed/25742003.
 
  • #25
Graeme M said:
My own personal intuition, and I see that it is a naïve one, is that when whatever happens internally happens, it just feels like that. There isn't anything to explain.

This is a tricky issue. On the one hand, there (presumably) has to be an end to the chain of explanation, at which we are just left with "brute facts". On the other hand, statements like the one you made there could hinder progress. For example, before Newton, many people took a similar view as to why things fall to the ground. I personally think there is plenty to explain.

Graeme M said:
Consider gravity or magnetism. We can explain the forces and make predictions of behaviours, but do we have any explanation of what they feel like? What does it feel like to be attracted to the surface of the earth? We know there is such a feeling but we don't need a physical explanation of what the feeling is in order to have a perfectly workable theory of gravity or electromagnetic forces.

I think there are at least two major problems with this view. Firstly, you are conflating what gravity feels like (a question whose answer lies in an understanding of the nervous system) with the laws of gravity (which have nothing to do with the nervous system).

Secondly, we don't even have something like a theory of gravity for consciousness. In gravity, we have a description of how things appear to behave, but no understanding of why they behave that way. For consciousness, we don't even have a theory of how consciousness behaves. To achieve something like a theory of gravity for consciousness, we would need to be able to predict, for an arbitrary physical system, what kind of experiences it has.
 
  • #26
madness said:
To achieve something like a theory of gravity for consciousness, we would need to be able to predict, for an arbitrary physical system, what kind of experiences it has.

Good point!
 
  • #27
fredreload said:
That's an interesting concept. A friend once mentioned that the neuronal structure resembles that of a transistor; maybe understanding the individual neuronal functionality, along with the delay on electrical and chemical synapses, would give a better idea of how the brain works.

Neurons are mostly modeled as an Op Amp with weightings on each of its summed inputs.
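That weighted-sum abstraction can be sketched in a few lines. The sigmoid nonlinearity and all the numbers here are illustrative only, not a claim about any particular neuron model.

```python
import math

def neuron(inputs, weights, bias):
    """Textbook point-neuron abstraction: a weighted sum of inputs
    passed through a saturating nonlinearity (here a sigmoid),
    loosely analogous to a summing amplifier with a soft clip."""
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1.0 / (1.0 + math.exp(-total))

# Two inputs, one weighted excitatory and one inhibitory:
out = neuron(inputs=[1.0, 0.5], weights=[2.0, -1.0], bias=-0.5)
print(round(out, 3))  # → 0.731
```

Real neurons are of course far richer (spiking dynamics, synaptic delays, neuromodulation), which is exactly the point being made about the electrical and chemical synapse timing above.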
 
  • #28
I found the article here: http://www.gizmag.com/harvard-synaptic-transistor-artificail-intelligence/29668/. To get the correct working mechanism you'll probably have to observe someone's brain's electrical synapses 24/7. You can simulate a working brain, but you can't have the computer tell you how the brain feels unless you have a working algorithm. As mentioned in the article, the brain does not work on binary 0-or-1 inputs, so this makes it a lot more complicated.
 
  • #30
Hi Demy:

Re http://www.sciencedirect.com/science/article/pii/S002251931500106X, I confess that I only scanned the article. It seems to have a lot of interesting ideas, but I did not read it carefully.

The general impression I get is that the presented theory is entirely reductionistic. That is, the concept of emergent phenomena is missing. If this admittedly quick judgement is correct, I consider this to be a flaw. If I am wrong and there is a discussion of emergent phenomena somewhere, I would appreciate someone specifying some word or phrase I can search for to find it.

Regards,
Buzz
 
  • #31
Graeme M said:
Consciousness is what it feels like for the brain to continuously construct a model of attention - a model that changes moment by moment and which correlates a range of perceptual data and unconscious processing into a directive process for managing the organism's behaviour.

Hi Graeme:

The relationship among consciousness, attention, and other mental functions were at one time of sufficient interest to me I wrote an essay about it. The essay was never published, but at the time I included it among other unpublished essays on a website that is now defunct. If you are interested you can find the essay as it was preserved in an archive.
http://web.archive.org/web/20090108160834/http://users.rcn.com/bbloom/PiecesOfMyMind.htm

Regards,
Buzz
 
  • #33
atyy said:
Is Nikolic a very common surname?
Not that common. The author is a brother of the other Nikolic you know. :wink:
 
  • #34
Demystifier said:
Not that common. The author is a brother of the other Nikolic you know. :wink:

Actually, I've read one of his papers http://www.danko-nikolic.com/wp-content/uploads/2011/09/Nikolic-Haeusler-et-al.-PLoS-Biology.pdf in some detail before - but I had not remembered the first author was a Nikolic!

That paper mentioned "fading memory". I happened to have just used that term in https://www.physicsforums.com/threads/state-space-vs-classical-control.833353/#post-5235405. I first learned about fading memory because Wolfgang Maass (your brother's coauthor) referred to the paper by Boyd and Chua in another of his papers.
 
  • #35
Returning to Prinz and Graziano's hypotheses, Prinz suggests that his AIR theory can be stated simply as "consciousness arises when and only when intermediate-level representations are modulated by attention".

Ignoring for the moment whether an 'intermediate-level representation' is a valid notion, something I find difficult to follow is the idea that attending gives rise to consciousness. I suspect I am getting back to my difficulty with definition.

Prinz seems to me to be saying that there is a difference between consciousness and awareness, whereas I would have thought that consciousness includes awareness (I see 'consciousness' as a spectrum).

For example, he cites various experimental results, such as masking studies, to illustrate how certain visual stimuli can be perceived by the cognitive system but not consciously experienced (or more exactly, reported as not consciously experienced). He suggests that it is the operation of attention that provides a distinguishing mechanism such that a particular stimulus is consciously appreciated.

Here I am unclear on what is meant by "attention". Professor Graziano specifies attention as an internal process in which many signals vie for attention and only those that exceed some threshold achieve attentional status. However Prinz seems to be using attention in a more colloquial sense - that is, I attend by focusing attention on something.
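Graziano's competition-with-threshold picture, as described here, can be sketched as a toy selection rule. All the signal names, salience values, and the threshold are invented for illustration; this is not a claim about any actual neural mechanism.

```python
def attended(signals, threshold=0.7):
    """Toy competition model: signals with salience at or above the
    threshold 'win' and achieve attentional status; the rest are
    still processed but remain in the background."""
    return {name for name, salience in signals.items()
            if salience >= threshold}

signals = {"reading": 0.9, "coffee_shop": 0.4, "hubbub": 0.3}
print(sorted(attended(signals)))  # → ['reading']
```

In this sketch, raising the threshold during focused attention would filter out even moderately salient background signals, which fits the suggestion made further below about focused attention raising the signal threshold.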

I observe in my own experience that the latter kind of attention certainly brings with it a clearer sense of a thing. For example sitting at a table in a coffee shop and focusing on reading Prinz's words leads to a sense of isolation from surrounding activities. However, the world around me does not completely disappear in an experiential sense - I am still aware of the coffee shop and the movements of people in it and so on. I still am aware of the hubbub of sound around me.

As a further example, I can walk from the coffee shop to my work, while counting simultaneously from 1 to 10 and 10 to 1. This means I must focus attention on holding the numbers in my mind in two different forms - verbal and graphical - while also being aware of my surroundings in sufficient detail to find my way to work.

Now it may be that I divide and conquer by attending each process in small time slices that are not directly sequential, but still it seems to me that I am not consciously directing attention at the process of walking to work.

That leads me to conclude that attention as a biological process that mediates awareness or consciousness must fall more in the form of Graziano's description than Prinz's. All perceptual input is processed by the cognitive system at a primary level while only certain signals achieve priority for further processing into conscious experience (awareness). Thus the background around me can still be sensed (I can be aware of it) because the signal is of sufficient strength to achieve experiential status.

Prinz's idea of attention as a sort of directive act could still be valid in that such a form of attention should require allocation of more resources to the process. Perhaps it raises the signal threshold in cases of focused attention such that background signals are effectively filtered from conscious experience (for example in the well-known case of the basketball and the gorilla).

Put another way, I think I am aware of things even if not attending to them, however by attending to a thing I am definitely more aware of it. But in this sense, am I aware of a thing because I attend to it, or am I attending a thing because I am aware of it? The latter seems more reasonable. Therefore I think Graziano's idea is more consistent because in such an interpretation the process of attention leads to both background and foreground awareness as consciously discerned, whereas Prinz's idea suggests only foreground awareness can be conscious. Marrying the two ideas as I suggest above seems to resolve that.

Anyway, this is all speculation on my part. I am just illustrating my inability to quite grasp what Prinz is driving at when he states his AIR theory. Is there a formal definition for 'attention' in the sense it is used in the field of neuroscience?
 

1. What is consciousness?

Consciousness is the state of being aware of one's surroundings and experiences. It is the subjective experience of our thoughts, feelings, and perceptions.

2. What is the Attention Schema Theory?

The Attention Schema Theory is a scientific theory that aims to explain how consciousness arises in the brain. It proposes that consciousness is a product of the brain's ability to create a simplified model of attention and its effects on the body and the environment.

3. How does the Attention Schema Theory explain consciousness?

The Attention Schema Theory suggests that the brain creates a simplified model of attention, which is used to monitor and control the body's resources and interactions with the environment. This model is then used to create a sense of self and awareness, leading to the experience of consciousness.

4. What is the role of attention in the Attention Schema Theory?

In the Attention Schema Theory, attention is seen as a crucial component in the creation of consciousness. It is through attention that the brain is able to select and focus on certain information, which is then used to create a model of attention that contributes to the experience of consciousness.

5. What is the significance of the Attention Schema Theory?

The Attention Schema Theory offers a new perspective on the nature of consciousness and has the potential to advance our understanding of this complex phenomenon. It also provides a framework for further research and could potentially have implications in various fields, such as psychology, neuroscience, and artificial intelligence.
