Consciousness and the Attention Schema Theory - Meaningful?

Jesse Prinz's "The Conscious Brain" introduces the Attended Intermediate-level Representation theory, positing that consciousness arises at an intermediate sensory processing level, integrating lower-level brute processing and higher-level abstraction. Prinz suggests that emotional experiences also stem from this intermediate evaluation of bodily responses. In contrast, Michael Graziano's Attention Schema Theory argues that consciousness is a model of attention, constructed by the brain to represent what is being attended to, with awareness emerging from this model. Both theories explore the relationship between attention and consciousness, yet they differ in their explanations of how consciousness is formed and its implications for understanding subjective experience. The discussion highlights ongoing speculation and research in the field, particularly regarding the complexities of sensory processing and the challenges of defining consciousness.
  • #91
I've been busy these past few days and haven't had a chance to properly follow the discussion. On reading it through, I still feel I don't understand some basic foundations to the idea of a 'hard' problem for consciousness.

Much commentary and discussion here and elsewhere seems to me to address things such as self-awareness or language, or perhaps more exactly cognitive function, rather than consciousness. My take on the hard problem is best summarised by Pythagorean's statement:

"Really, the only aspect of consciousness that presents an epistemological challenge is the subjective experience: that matter can have feelings when it's arranged in the right way."

My idea of the hard problem is simply that there is an experience of awareness. Why should a brain that is just doing physical stuff have an experience of, for example, an external world? The external world, whatever that is, appears in my mind as "out there". My brain has an inner representation of the external world but the really curious thing is that it feels like it is out there and it is the world I am part of. That is, my interactions with this mental representation fairly accurately resemble my interactions with external objects.

Consciousness itself seems pretty straightforward, relatively speaking. That is, it seems to me to be the facility to represent the external world within a system such that the system can interact with the external world via that representation. That explains some of my earlier comments - it seems to me that any organism which can have some kind of representation of the external world and respond behaviourally to that is therefore conscious.

So, if I sense the world and react to it, I am conscious. That would be my starting point. All the extra bits that DiracPool describes are remarkable features and represent an evolving complexity to biological consciousness, but as I suggested earlier, why does that make for something extraordinary? In a biological sense, doesn't it just boil down to responding behaviourally to a representation of the world?

To me, then, an 'easy' problem is explaining how this representation arises mechanically; a 'hard' problem is explaining why it feels to me that I am experiencing the world.

Tononi's idea, as much as I can understand it, sounds good. But it's just a quantification model. That is, a system is conscious if it has a high enough phi, and it has experience if the shape in Q space is significant enough. But that doesn't address the hard problem, if the hard problem is as defined by Pythagorean. It would be useful, if it worked, to be able to predict a conscious experience within another organism. But an equivalent Q space shape between an experience of mine and an experience of a blind burrowing mole only tells me that the mole is functionally conscious. It still offers no explanatory value for how I and the mole can actually come to have some feelings about the world.

I think a similar problem besets Graziano's theory, though I admit to not being quite sure what he means. If there is an Attention Schema model that informs us of an abstracted model of attention, and this is what gives rise to our qualia of experience, there is still the problem of why it is that we have that experience.

Or so it seems to me. What am I missing here?
 
  • #92
Hi Graeme,
Your descriptions of the easy and hard problems of consciousness are almost correct.
Graeme M said:
Much commentary and discussion here and elsewhere seems to me to address things such as self-awareness or language, or perhaps more exactly cognitive function, rather than consciousness. My take on the hard problem is best summarised by Pythagorean's statement:

"Really, the only aspect of consciousness that presents an epistemological challenge is the subjective experience: that matter can have feelings when it's arranged in the right way."
The quote from Pythagorean is correct, albeit brief and not meant to be a comprehensive description of the hard problem or of phenomenal consciousness.
Graeme M said:
My idea of the hard problem is simply that there is an experience of awareness.
Not exactly… We should use the definitions provided for “phenomenal” versus “psychological” consciousness as given by Chalmers since these also reflect the “hard problem” versus the “easy problem” respectively. The experience of awareness is only one of the phenomena picked out by phenomenal consciousness.

In his paper, “Facing up to the problem of consciousness”, Chalmers breaks consciousness up into two groups. The first group contains phenomena that are objectively observable, which he labels “easy”. We should all be able to agree on what is being observed when it comes to these phenomena, and they should be accessible to the normal methods of science. Chalmers states:
The easy problems of consciousness include those of explaining the following phenomena:
• the ability to discriminate, categorize, and react to environmental stimuli;
• the integration of information by a cognitive system;
• the reportability of mental states;
• the ability of a system to access its own internal states;
• the focus of attention;
• the deliberate control of behavior;
• the difference between wakefulness and sleep.

All of these phenomena are associated with the notion of consciousness. For example, one sometimes says that a mental state is conscious when it is verbally reportable, or when it is internally accessible. Sometimes a system is said to be conscious of some information when it has the ability to react on the basis of that information, or, more strongly, when it attends to that information, or when it can integrate that information and exploit it in the sophisticated control of behavior. We sometimes say that an action is conscious precisely when it is deliberate. Often, we say that an organism is conscious as another way of saying that it is awake.
Chalmers then turns to the hard problem, invoking Nagel's “What is it like to be a bat?”:
The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect. As Nagel (1974) has put it, there is something it is like to be a conscious organism. This subjective aspect is experience. When we see, for example, we experience visual sensations: the felt quality of redness, the experience of dark and light, the quality of depth in a visual field. Other experiences go along with perception in different modalities: the sound of a clarinet, the smell of mothballs. Then there are bodily sensations, from pains to orgasms; mental images that are conjured up internally; the felt quality of emotion, and the experience of a stream of conscious thought. What unites all of these states is that there is something it is like to be in them. All of them are states of experience.
In his book “The Conscious Mind”, Chalmers takes a slightly different tack: instead of breaking consciousness up into easy and hard phenomena, he calls them psychological consciousness and phenomenal consciousness (p-consciousness for short) respectively. His book is much more thorough and worth referring to. For p-consciousness, Chalmers lists a number of different “experiences” as follows:
Visual experiences. Among the many varieties of visual experience, color sensations stand out as the paradigm examples of conscious experience, due to their pure, seemingly ineffable qualitative nature. … Why should it feel like that? Why should it feel like anything at all? …

Other aspects of visual experience include the experience of shape, of size, of brightness and of darkness. A particularly subtle aspect is the experience of depth. … Certainly there is an intellectual story one can tell about how binocular vision allows information from each eye to be consolidated into information about distances, thus enabling more sophisticated control of action, but somehow this causal story does not reveal the way the experience is felt. Why that change in processing should be accompanied by such a remaking of my experience was mysterious to me as a ten-year-old, and is still a source of wonder today.

Auditory experiences. In some ways, sounds are even stranger than visual images. The structure of images usually corresponds to the structure of the world in a straightforward way, but sounds can seem quite independent. …

Musical experience is perhaps the richest aspect of auditory experience, although the experience of speech must be close. Music is capable of washing over and completely absorbing us, surrounding us in a way that a visual field can surround us but in which auditory experiences usually do not. …

Tactile experiences. Textures provide another of the richest quality spaces that we experience: think of the feel of velvet, and contrast it to the texture of cold metal, or a clammy hand, or a stubbly chin. …

Olfactory experiences. Think of the musty smell of an old wardrobe, the stench of rotting garbage, the whiff of newly mown grass, the warm aroma of freshly baked bread. Smell is in some ways the most mysterious of all the senses due to the rich, intangible, indescribable nature of smell sensations. … It seems arbitrary that a given sort of molecule should give rise to this sort of sensation, but give rise it does.

Taste experiences. Psychophysical investigations tell us that there are only four independent dimensions of taste perception: sweet, sour, bitter, and salt. But this four-dimensional space combines with our sense of smell to produce a great variety of possible experiences…

Experiences of hot and cold. An oppressively hot, humid day and a frosty winter’s day produce strikingly different qualitative experiences. Think also of the heat sensations on one’s skin from being close to a fire, and the hot-cold sensation that one gets from touching ultra-cold ice.

Pain. Pain is a paradigm example of conscious experience, beloved by philosophers. Perhaps this is because pains form a very distinctive class of qualitative experiences, and are difficult to map directly onto any structure in the world or in the body, although they are usually associated with some part of the body. … There are a great variety of pain experiences from shooting pains and fierce burns through sharp pricks to dull aches.

Other bodily sensations. Pains are only the most salient kind of sensations associated with particular parts of the body. Others include headaches … hunger pangs, itches, tickles and the experience associated with the need to urinate. …

Mental imagery. Moving ever inward, toward experiences that are not associated with particular objects in the environment or the body but that are in some sense generated internally, we come to mental images. There is often a rich phenomenology associated with visual images conjured up in one’s imagination, though not nearly as detailed as those derived from direct visual perception. …

Conscious thought. Some of the things we think and believe do not have any particular qualitative feel associated with them, but many do. This applies particularly to explicit, occurrent thoughts that one thinks to oneself, and to various thoughts that affect one’s stream of consciousness. …

Emotions. Emotions often have distinctive experiences associated with them. The sparkle of a happy mood, the weariness of a deep depression, the red-hot glow of a rush of anger, the melancholy of regret: all of these can affect conscious experiences profoundly, although in a much less specific way than localized experiences such as sensations. …

… Think of the rush of pleasure one feels when one gets a joke; another example is the feeling of tension one gets when watching a suspense movie, or when waiting for an important event. The butterflies in one’s stomach that can accompany nervousness also fall into this class.

The sense of self. One sometimes feels that there is something to conscious experience that transcends all these specific elements: a kind of background hum, for instance, that is somehow fundamental to consciousness and that is there even when the other components are not. … there seems to be something to the phenomenology of self, even if it is very hard to pin down.

This catalog covers a number of bases, but leaves out as much as it puts in. I have said nothing, for instance, about dreams, arousal and fatigue, intoxication, or the novel character of other drug-induced experiences. …
The best way I can describe p-consciousness is as a set of phenomena: the set characterized by phenomenal experiences. The term “phenomenal consciousness” picks out the set of phenomena known as qualia, best described as being subjectively observable but not objectively observable. There is something that occurs during the operation of a conscious brain which cannot be objectively observed. These phenomena are subjective in nature, and although they supervene on the brain, most will concede that they cannot be measured or described by explaining what goes on within the brain, such as the interactions between neurons, the resulting EM fields, or anything else that is objectively measurable.

The alternatives are either to explain phenomenal consciousness in strictly physical terms (i.e. the hard problem is just another easy problem) or to dismiss phenomenal consciousness altogether (i.e. eliminativism).

Chalmers, David J. “Facing up to the problem of consciousness.” Journal of Consciousness Studies 2.3 (1995): 200-219.
http://consc.net/papers/facing.html
Graeme M said:
… My brain has an inner representation of the external world but the really curious thing is that it feels like it is out there and it is the world I am part of. That is, my interactions with this mental representation fairly accurately resemble my interactions with external objects.

Consciousness itself seems pretty straightforward, relatively speaking. That is, it seems to me to be the facility to represent the external world within a system such that the system can interact with the external world via that representation. That explains some of my earlier comments - it seems to me that any organism which can have some kind of representation of the external world and respond behaviourally to that is therefore conscious.

So, if I sense the world and react to it, I am conscious. That would be my starting point. All the extra bits that DiracPool describes are remarkable features and represent an evolving complexity to biological consciousness, but as I suggested earlier, why does that make for something extraordinary? In a biological sense, doesn't it just boil down to responding behaviourally to a representation of the world?

To me, then, an 'easy' problem is explaining how this representation arises mechanically; a 'hard' problem is explaining why it feels to me that I am experiencing the world.
A computational system can have a “representation” of the world without having any phenomenal experience of it. My computer, for example, has a representation of one page of the internet on the screen that I'm looking at. That representation reflects the physical state of both (a small portion of) my computer and some computer it is getting the web page from. But there's no need to suggest my computer is actually having an experience of this representation. We wouldn't generally suggest that the colors on that web page are being experienced by any of the computers. Having a representation of the world bound up in the physical state of some system does not mean the system is having an experience of that representation.
Graeme M said:
Tononi's idea, as much as I can understand it, sounds good. But it's just a quantification model. That is, a system is conscious if it has a high enough phi, and it has experience if the shape in Q space is significant enough. But that doesn't address the hard problem, if the hard problem is as defined by Pythagorean. It would be useful, if it worked, to be able to predict a conscious experience within another organism. But an equivalent Q space shape between an experience of mine and an experience of a blind burrowing mole only tells me that the mole is functionally conscious. It still offers no explanatory value for how I and the mole can actually come to have some feelings about the world.
Agreed. Tononi's theory doesn't actually say how or why some system has a phenomenal experience; it just suggests that IFF the system has a high enough phi, THEN the system must be having some sort of experience.
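To make "integration" a little more concrete, here is a toy Python sketch (my own illustrative construction, not phi itself) of a simpler integration measure of the kind used in Tononi's earlier work: the sum of the parts' entropies minus the entropy of the whole, sometimes called multi-information. The three-node "networks" below are invented purely for the example.

```python
import math
import random
from collections import Counter

def entropy(samples):
    """Shannon entropy (in bits) of a list of hashable observations."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def integration(states):
    """Multi-information: per-node entropies summed, minus the joint entropy.

    `states` is a list of tuples, one tuple per time step, one entry per
    node. The result is ~0 for independent nodes and grows as the nodes
    share state.
    """
    joint = entropy(states)
    parts = sum(entropy([s[i] for s in states]) for i in range(len(states[0])))
    return parts - joint

random.seed(0)

# Three independent binary "nodes": integration should be near zero.
indep = [tuple(random.randint(0, 1) for _ in range(3)) for _ in range(10000)]

# Three fully coupled nodes (all copy one bit): integration should be ~2 bits.
coupled = [(b, b, b) for b in (random.randint(0, 1) for _ in range(10000))]

print(f"independent nodes: {integration(indep):.3f} bits")
print(f"coupled nodes:     {integration(coupled):.3f} bits")
```

Phi proper goes further, roughly by asking how much information the whole carries beyond its weakest partition, but even this toy version shows what it means to read "integration" off a distribution of states, and why none of it touches the question of whether anything is experienced.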
 
  • #93
Thanks Q-Goest.

Q_Goest said:
A computational system can have a “representation” of the world without having any phenomenal experience of it. My computer for example, has a representation of one page of the internet on the screen that I'm looking at. That representation reflects the physical state of both (a small portion of) my computer and some computer it is getting the web page from. But there's no need to suggest my computer is actually having an experience of this representation. We wouldn't generally suggest that the colors on that web page are being experienced by any of the computers. Having a representation of the world bound up in the physical state of some system does not mean the system is having an experience of that representation.

Why is it presumed that consciousness must be accompanied by subjective experience to be consciousness? If a brain consists of cells that connect via electrochemical signals, all we have is a computational network. All that can be happening is input->processing->output. The processing bit is complex to unravel, but that's just a mechanical problem. Our subjective experience, however it happens and whatever it seems like, is no more than part of the processing stage. We can say that subjective experience is a hard problem, but at the end of the day why is that relevant to assessing consciousness? Put another way, regardless of any mystery here, is not a brain just doing the same thing your computer is doing?

Therefore, why should we not consider your computer as being conscious? If we applied Tononi's theory to your computer and the phi value were high enough (but I assume a low Q-space presentation) then the computer might be conscious. It may not be having a subjective or phenomenal experience, but it might be conscious. I am not saying that I think a computer IS conscious; I am asking what physical distinction can we impose on a system to prevent its claim to consciousness? And why?
 
  • #94
Graeme M said:
... Put another way, regardless of any mystery here, is not a brain just doing the same thing your computer is doing?
... I am not saying that I think a computer IS conscious; I am asking what physical distinction can we impose on a system to prevent its claim to consciousness? And why?
Whether or not a computer can have a subjective experience has been debated for a very long time. There are good arguments on both sides of the issue, but because there are so many logical dilemmas created by computationalism, there is no unanimous agreement. Going into those issues is outside the scope of this thread and is generally not supported by PF.
 
  • #95
Thanks Q_Goest. And I agree, I think the original question I posed has been well explored and further discussion of this nature is not likely to be in the spirit of PF.
 
  • #96
Thanks for joining us Q Goest!

Q_Goest said:
Agreed. Tononi's theory doesn't actually say how or why some system has a phenomenal experience, it just suggests that IFF the system has a high enough phi, THEN the system must be having some sort of experience.

The presumption is that the integration of information (in a particular way) is how consciousness arises. Tononi essentially states an equivalence between them. Just as in typical scientific discourse, we would then take this model and see if it makes predictions about consciousness (which Tononi has done with coma and sleeping patients). This is as close as science can get to any question: making models of the phenomena that "work" (robustly make successful predictions). We can never really know if the map we make really describes the territory or just works to predict its behavior (then we get into interpretations, as with QM).

So as far as the hard problem is concerned, all we can really do in science is work on the "pretty hard" problem, which requires a careful integration of philosophy and science.
 
  • #97
Pythagorean, did you post any links to papers about Tononi's work with coma and sleeping patients? I may have missed those. If not, do you have any references I could chase up?
 
  • #99
Great, thanks for that.
 
  • #100
I am not sure if anyone is still following this thread, but I've read more of the Tononi papers and one question that comes to mind is how one could use this theory to make predictions about a particular network. It seems necessary to be able to compute the network complexity (i.e. nodes/connectivity) before the phi value can be computed. Wouldn't that be largely prohibitive for, say, a human brain, given the number of nodes and possible connections? Would IIT be practically applicable to anything other than relatively simple networks?
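As a back-of-envelope illustration of that worry (my own arithmetic, not anything from the papers), here is a sketch of how the search space implied by phi grows with network size; even counting only two-way partitions, the number of candidates explodes:

```python
def bipartitions(n):
    """Number of ways to split n nodes into two non-empty groups."""
    return 2 ** (n - 1) - 1

# 302 is the C. elegans neuron count; a human brain (~8.6e10 neurons)
# is far beyond anything enumerable.
for n in [4, 10, 20, 302]:
    print(f"{n:>4} nodes -> {bipartitions(n):.3e} candidate bipartitions")
```

At 302 nodes there are already on the order of 10^90 bipartitions to consider, which is presumably why the empirical work tends to rely on proxies for integration rather than exact phi.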

That said, Tononi's proposal regarding information integration seems very sensible. The paper linked above by Pythagorean notes that loss of consciousness in sleep is very likely due to breakdown in overall network integration, especially between dispersed functional modules.

This paper http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1003271 notes that propofol-induced unconsciousness is characterised by a loss of wide-scale integration of information processing (that is, increased clustering of connectivity occurs under these conditions) and reduced efficiency in information distribution.
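To make those two graph measures concrete, here is a toy sketch using standard networkx functions (my own caricature of the modular-versus-integrated contrast, not the paper's actual analysis pipeline): a graph of dense modules scores high on clustering, while a same-sized graph with edges spread uniformly scores higher on global efficiency.

```python
import networkx as nx

# "Clustered": four dense 8-node modules joined in a ring - a caricature
# of the fragmented, propofol-like state.
clustered = nx.connected_caveman_graph(4, 8)

# "Integrated": same node and edge count, edges placed uniformly at
# random - a caricature of the widely integrated, awake-like state.
integrated = nx.gnm_random_graph(
    clustered.number_of_nodes(), clustered.number_of_edges(), seed=1
)

for name, g in [("clustered", clustered), ("integrated", integrated)]:
    print(f"{name:>10}: clustering = {nx.average_clustering(g):.3f}, "
          f"global efficiency = {nx.global_efficiency(g):.3f}")
```

On these toy graphs the clustered network comes out with much higher clustering and lower efficiency, which is the qualitative signature the paper reports for the anaesthetised brain.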

So there is empirical evidence for changes in consciousness through loss of integration. But then, isn't that somewhat self-evident? If I am conscious for a given state of connectivity, reducing that connectivity might reduce consciousness.

Nonetheless, what is interesting is what that says in relation to the original subject of this thread. Prinz's AIR suggests that consciousness arises from attended intermediate level representations that are instantiated neurally via what he calls gamma vectorwaves. So it is synchronous firing at gamma frequencies that facilitates consciousness, yet Tononi specifically argues that it is integration which does this, since neural firing patterns remain detectable even in sleep.

However, I don't think I see that as especially problematic for either view. If neural cell populations need to fire at gamma frequencies to enable the AIR of Prinz, it seems reasonable to consider that, as a total construct, it is connectivity that plays the key role in realisation. Thus even if representation requires synchronous firing of neurons at gamma frequencies, that of itself doesn't mean we should be conscious of those representations if the total arrangement is insufficient. Prinz suggests that the vividness of consciousness arises through the number of cells that are firing and that it can fade as the proportion of synchrony decreases.

So, on both IIT and AIR, wouldn't it make sense then that when neural correlates of representations occur at gamma frequencies, it is the overall dispersal of such synchrony across related functional modules that instantiates a conscious experience? In fact, I assume that for AIR to work, the very idea of a gamma vectorwave requires wide network connectivity.
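As a toy illustration of what "dispersed gamma synchrony" could mean operationally (a sketch under my own assumptions, not Prinz's actual proposal), one standard way to quantify synchrony between two signals is the phase-locking value: simulate two 40 Hz signals, and the PLV is near 1 when their phases track one another and falls toward 0 when they drift independently.

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)
fs, f_gamma = 1000.0, 40.0      # sample rate (Hz), gamma frequency (Hz)
t = np.arange(0, 2.0, 1 / fs)   # two seconds of signal

def plv(x, y):
    """Phase-locking value: 1 = phases locked, ~0 = phases independent."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * dphi)))

ramp = 2 * np.pi * f_gamma * t

# Two gamma signals sharing a phase (constant offset plus noise): "synchronous".
locked_a = np.sin(ramp) + 0.2 * rng.standard_normal(t.size)
locked_b = np.sin(ramp + 0.5) + 0.2 * rng.standard_normal(t.size)

# Two gamma signals whose phases wander independently (random walks).
drift_a = np.sin(ramp + 0.1 * np.cumsum(rng.standard_normal(t.size)))
drift_b = np.sin(ramp + 0.1 * np.cumsum(rng.standard_normal(t.size)))

print(f"locked pair:   PLV = {plv(locked_a, locked_b):.2f}")
print(f"drifting pair: PLV = {plv(drift_a, drift_b):.2f}")
```

On that reading, the reconciliation sketched above amounts to asking whether high-PLV pairs span many functional modules (integration, in Tononi's sense) rather than staying confined within one.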

Presuming of course that Prinz and Tononi are right - I am certainly not able to evaluate that! I realize that I might just be stating the obvious, or something already well known or discounted; I am more trying to get my head around what these various authors are saying and whether there are points of agreement between their ideas.
 
