Consciousness and the Attention Schema Theory - Meaningful?

In summary, the author argues that consciousness is formed at the intermediate level - the lower level handles raw, brute processing, while the higher level is concerned with abstraction and categorization. He also suggests that attention is the mediating agent, and that awareness (his attentional model) arises in the superior temporal sulcus and the temporoparietal junction.
  • #71
madness said:
If IIT explained how experience emerges from information, using some standard physical mechanism, it would not be proposing a fundamental relationship between the physical process and the experience. Within the theory, the relationship between phi and consciousness is fundamental and does not reduce to any simpler underlying mechanisms or relationships.

I think the main problem here in trying to come up with an explanation of what consciousness "is" lies in the fact that no one can really agree on what they are trying to define. Each person comes up with as broad or as narrow a definition of the term as suits their needs--i.e., one that is interesting to them or that they think they can manage as far as constructing a model--the end result being that everyone ends up talking past each other.

Susan Pockett recently wrote an article dealing with a number of the models in this thread: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4243501/pdf/fnsys-08-00225.pdf

She divides contemporary consciousness models into "process" versus "vehicle" theories, with almost all currently fashionable models being of the "process" variety. That is certainly reflected in most of the models discussed in this thread thus far. As I've stated in earlier posts, I'm not too enamored of Pockett personally; she's kind of a grump :oldmad:. However, I'd have to say I mostly agree with her assessment of information-based process theories in the article, especially Tononi's model. There are fundamental problems with these models in their treatment of the subject.

What are those problems, you might ask? Well, there are a number of them, but I think the principal problem here is the consistent reluctance of consciousness researchers, or even cognitive science researchers in general, to draw a sharp distinction between the function of the human brain and that of the non-human animal brain. To put it another way, I think the single biggest problem here is the idea that consciousness is a "thing" or a property of brains in general, and that what needs to be done is to figure out how neural assemblages generate this consciousness. (To clarify, when I use the term "consciousness" here, I am referring to phenomenological consciousness, the kind that is characterized as the "hard problem.")

The fact is that we, as humans, have no idea what the consciousness of a mouse is, or a cricket, or a frog, or even a chimpanzee. We can only speak of what it is like to have a "human" consciousness. This human consciousness comes with a lot of features that non-human consciousness does not come with. To name a few of these features: the certitude of a sense of self-awareness, of language capacity, of logic-sequential thought structures, of musical ability, of mathematical ability, of a "theory of mind" capacity to project onto other individuals (and unfortunately also onto non-human animals and even inanimate objects), of the capacity to issue introspective reports of qualitative experiences and thoughts, and many others. We don't know for sure whether any non-human animals have any of these capacities. So it seems highly probable to me that the phenomenological consciousness we experience is somehow wrapped up in this suite of other uniquely human capacities we possess. I think that theories that try to model consciousness as a single physical process, and that are argued to apply to essentially all animal taxa, are largely missing the point and are, at best, academic exercises likely to yield little, if any, progress toward the goal of explaining human conscious experience. These models include Tononi's information-integration theory, McFadden's EM theory, Hameroff's microtubule model, the vast number of "quantum theories" of consciousness which equate the collapse of the wave function to human sentient experience, and even Graziano's "attention schema theory," which I see as simply another process model.

From: http://aeon.co/magazine/philosophy/how-consciousness-works/

" (The attention schema theory) says that awareness is not something magical that emerges from the functioning of the brain. When you look at the colour blue, for example, your brain doesn’t generate a subjective experience of blue. Instead, it acts as a computational device. It computes a description, then attributes an experience of blue to itself. The process is all descriptions and conclusions and computations. Subjective experience, in the theory, is something like a myth that the brain tells itself. The brain insists that it has subjective experience because, when it accesses its inner data, it finds that information."

I'm not sure what this is supposed to tell me about my conscious experience or how it is different from my cat's experience. His idea of mental processes being ordered and structured like a General looking at a model of his army is interesting, and probably true in a sense, but, again, it tells me nothing of why I need to have a phenomenological experience of that schematic construction. It also does not tell me whether or not a macaque monkey has a similar construction, and a phenomenological experience of it, going on in its "mind." Is there a monkey General in the macaque's brain? I submit that, until we have an adequate brain-based model for how the human mind generates consciousness, and what that is empirically, it makes little sense to talk about animal consciousness at all, especially in terms of how to compare it to a human consciousness we haven't even defined yet.
 
  • #72
DiracPool said:
I submit that, until we have an adequate brain-based model for how the human mind generates consciousness, and what that is empirically, it makes little sense to talk about animal consciousness at all, especially in terms of how to compare it to a human consciousness we haven't even defined yet.

I don't agree wholeheartedly with this. Certainly we have better access with humans, and that grants us a faster way forward, but I don't think it's completely pointless to compare easy-problem data across species.

I also think consciousness is a bit of an outdated term. We've already naively broken it into constituents. It's an umbrella term that includes the subjective experience (qualia and the self), cognition, self-awareness, etc. Most of those are "easy" problems, and progress is always being made on them. Really, the only aspect of consciousness that presents an epistemological challenge is the subjective experience: that matter can have feelings when it's arranged in the right way. What is that arrangement, and how does it give rise to a self that experiences things? That's really the "holy grail" of inquiries into consciousness (thus, the hard problem). I suspect that the hard problem is intimately connected to the easy problem, and that the more complete a picture we have of the easy problem, the better we can formulate a solution to the hard problem. But there's lots of the easy problem left to solve (the easy problem is not easy!).
 
  • #73
DiracPool, I agree completely about the question of definition, but then the rest of your comment confuses me.

I understood that science has a fair handle on how the brain and nervous system function in terms of physical structure and operation, and that this is the same for all creatures with a nervous system and brain. I'd ask then if something unique to the mechanisms of function has been found in human brains. Certainly complexity or arrangement or functional expression might differ by degree, but at the end of the day don't we only see the same fundamentals at work?

For myself, I would be quite open to believing that phenomenologically the experience of a human being must be shared by most mammals and birds at least. I'm not at all sure I'd accept that the human brain has something else going on that gives those qualities listed above some capacity for expression in humans that is absent in other animals. I'd suggest that list really speaks to a level of intellectual capacity rather than one of experience.

If I can see red, or can feel anger or pain, or enjoy warmth, these are experiential properties that I think are subsumed within the meaning of consciousness. I'm reasonably confident other creatures can see red, be angry, feel pain or enjoy warmth.

If a creature can discern a red object and make a behavioural choice in regard to that discernment, on what basis should I consider that creature not to have experienced redness? Upon considering what it is for a human being to have these experiences we might assume that this curious property of subjective experience is some thing apart from the brain's material function, but isn't it rather anthropocentric to be unwilling to then assign that same experience to other beings when there is no evidence for some unique material property of a human brain?

That is, wouldn't it be more parsimonious to assume that in creatures with a brain and a nervous system that operate according to the same physical laws as a human brain, and about which we can predict behavioural responses to stimuli, subjective experience or consciousness is also present? Intellectual capacity may differ, it is true, but consciousness as a fundamental property of a nervous system appears to me to be the simplest proposition.
 
  • #74
Graeme M said:
I'd ask then if something unique to the mechanisms of function has been found in human brains.

This raises the age-old question of whether there is something special about matter in biological form that yields the "spark of life," or élan vital, of a living organism. Of course, for this discussion we can include sentient experience in that category. As far as we know, there is nothing magical or special about neurons that yields a special, non-physical sentient consciousness. It's all in the organization. There's nothing fundamentally unique about the human brain over other primate brains as far as its general architecture and neurochemistry. However, there is a significant difference from other primates in the proportions of its gross structure. Specifically, that difference is the gross overdevelopment of the prefrontal cortex (PFC) and structures related to the PFC, such as the lateral mediodorsal nucleus of the thalamus and the post-Rolandic sensory association areas of the cortex that the PFC is directly connected with. These areas include the temporo-parietal junction (TPJ) you mentioned that Graziano discusses in his model. Although I haven't read the book you listed, the TPJ is a popular "convergence zone," as Damasio calls them, for speculation on the origins of higher cognitive functions in humans. It's not unique to Graziano's model. See: https://www.amazon.com/dp/0156010755/?tag=pfamazon01-20 However, the picture is much more complicated than simply localizing higher brain functions to certain brain regions or even small networks of regions.

The important point, though, is that if you look at the comparative neuroanatomy of primates, the human condition is not continuous with the development of pre-Homo forms or even early Homo forms leading up to Homo erectus. The real bifurcation in brain development started with Homo erectus, and this is where to look for clues as to where "human uniqueness" came from.

Graeme M said:
I'm not at all sure I'd accept that the human brain has something else going on that gives those qualities listed above some capacity for expression in humans that is absent in other animals.

This opinion is likely because you haven't studied comparative neuroanatomy and you think that conscious experience is principally a property of any network of interacting neurons, and only secondarily of the particular organization of those networks, which may yield more the "contents" of that sentient experience than the sentience itself. This is a common misconception (IMHO), and one I don't personally share.

Graeme M said:
For myself, I would be quite open to believing that phenomenologically the experience of a human being must be shared by most mammals and birds at least.

What do you base these beliefs on? Just a hunch? An opinion based on what you project is going on in the mind of a bird when she's hunting for a worm? Obviously the human brain has "something else going on" compared to that of a bird or a mouse. We can organize expeditions to Mars and build stealth bombers. Birds can build a nest.

Graeme M said:
If a creature can discern a red object and make a behavioural choice in regard to that discernment, on what basis should I consider that creature not to have experienced redness?

There's the rub. It's not about whether the creature "experiences" redness, it's whether the creature knows it is experiencing redness. That's the distinction I'm trying to draw. This, in my opinion, is what distinguishes human consciousness from whatever form of consciousness other animals may possess. Specifically, it is the ability of humans to be aware of and reflect upon their consciousness (call it meta-perception, etc., if you will) and, most importantly, the ability of a human to give an introspective report of that sentient experience. This introspective report is the criterion that most psychologists and psychophysiologists have traditionally used, and still use today, to definitively qualify a conscious experience. No animal other than humans has to date demonstrated this capacity. So the proof is in the pudding.

Again, this is why I said in the previous post that we are not going to get a handle on what the consciousness of nonhuman animals is like until we first understand specifically what processes in the human brain are associated with sentient experience and the ability to report that experience. Once we accomplish this, we can then compare those human brain processes to those of a target nonhuman species and see how they match up. At this point, I think we'll then have a better grasp of what's going on in that animal's head, not only as far as their cognitive capacity, but also as to what mental experiences that animal may be having. Until then, any discussion of "animal consciousness" is simply idle conjecture in my opinion. So to address the question in your thread title, "...--Meaningful?", I would answer "not so meaningful." :redface:
 
  • #75
madness said:
My understanding of Graziano's theory is that it proposes a mechanism which performs a function. Any such theory is, by Chalmers' definition, a solution to an easy problem. I think you are correct that, if you make the extra step to say "all systems which implement this mechanism are conscious" (and I think also some other statement such as "no systems which do not implement this mechanism are conscious") then you will have a theory which addresses the hard problem. Do you think these statements are reasonable for a proponent of Graziano's theory to make?

I would guess so too. I think Pythagorean's comments are along the same lines.

madness said:
It depends. I've recently seen people talk of the "pretty hard problem", which is to provide a theory of exactly which physical systems are conscious, by how much, and what kind of experiences they will have, but without explaining why. I'm not sure we can ever achieve a solution to the real hard problem, because there is a similar "hard problem" with any scientific theory. Chalmers' and Tononi's proposed solutions seem to attack the pretty hard problem rather than the hard problem. Noam Chomsky actually does a great job of explaining this point.


But Tononi's proposal also does not address the "pretty hard problem", does it? It doesn't address the point you mention about "what kind of experiences they will have".
 
  • #76
atyy said:
I would guess so too. I think Pythagorean's comments are along the same lines.

Thinking again, it's a little more complicated. If Graziano subscribed to functionalism, then any system which implements his proposed function should be conscious. Presumably he would take a functionalist approach, but that leads to problems like the China brain.
atyy said:
But Tononi's proposal also does not address the "pretty hard problem", does it? It doesn't address the point you mention about "what kind of experiences they will have".

In addition to phi, there is the "qualia space (Q)": http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1000462. This is intended to address the kind of experiences they will have.
 
  • #77
DiracPool said:
What do you base these beliefs on? Just a hunch? An opinion based on what you project is going on in the mind of a bird when she's hunting for a worm? Obviously the human brain has "something else going on" compared to that of a bird or a mouse. We can organize expeditions to Mars and build stealth bombers. Birds can build a nest.

I suppose that I really don't know - perhaps, as you say, just a hunch. I think it comes back somewhat to what I mean by consciousness. I simply meant that if evolution has provided certain mechanisms by which brains and nervous systems can sense the world, I'd have thought it is most likely that this is done in much the same way for most animals. I don't know anything at all about comparative neuroanatomy, but does, say, the visual system of a rat work much the same way as that of a human?

If the general structures and processes are the same, then a rat should be aware of red, or hot, or whatever. Which is what I mean by consciousness. Humans may have more going on, but I don't see why that adds something extraordinary to the mix. To be aware that you are aware of red is really just a wrinkle on an established function, isn't it? An evolutionary improvement to enable more adaptive behaviours.

Susan Pockett wrote:
"We know we are conscious. Other humans look and act more or less like us, so when they tell us they have a particular conscious experience, we give them the benefit of the doubt. But what about a bit of hardware? Even a novice software writer could produce a piece of code that typed "I feel hot" whenever a thermostat registered a high temperature, but not many people would believe the appearance of this message meant the thermostat was experiencing hotness."

I'm not sure I'd agree that this application that reports it is hot is conscious, but then, in essence, isn't that all a human brain does in the same context? It senses heat, pulls together existing internal information, matches that with the new sensory data, and generates a report such as "I feel hot". I am not sure I see anything much more happening there. To know that it is me that feels hot and that I have experienced this hotness seems to add little in a base-process sense. What really helps is language (to share such information and build knowledge), capacity for abstract thinking (not a property of consciousness per se, I think?) and functional hands (to apply knowledge in adapting the environment).

Regarding the bird and its worm, I think that, broadly speaking, the same things happen in her mind as happen in mine when I look for something. I perceive the world, I have a representation of what I am looking for, I match incoming data with that representation, and when there is a match I grab it. My brain gives me a more useful set of routines to apply in how I go about my search, but isn't it really the same thing going on? Evolution has just given me a greater degree of functionality from the same basic toolkit.

Graziano suggests that this is exactly what evolution has done when it comes to our sense of experience. Although I only dimly grasp his idea, I think he is saying that what we think of as experience is not an actual experiential property at all. The Attention Schema is a model of the brain's process of attention - it makes us think that we are beings with awareness and experience but we aren't. We are neurons and connections and processes. We are conscious of things, or aware of things, just like other animals. But what our brains do (which may indeed be that uniqueness of the human condition) is to propose to us that we are actually aware of ourselves being aware of things.

As Graziano says, "With the evolution of this attention schema, brains have an ability to attribute to themselves not only "this object is green" or "I am a living being", but also, "I have a subjective experience of those items". (Speculations on the evolution of awareness. Journal of Cognitive Neuroscience, 2014. http://www.princeton.edu/~graziano/Graziano_JCN_2014.pdf ).

If he is right, his theory explains what it is to have a human conscious experience that seems so fundamentally subjective, and why that happens. However, I think what others here have said is true - this theory wouldn't explain the hard problem, because it doesn't tell us how it can be, for example, that I actually see the colour red in my mind or see a representation of the external world in my mind. It tells us how it is we are aware of the awareness, but not how the awareness arises in the first place.

I do think though that I share the experiences, the awareness itself, with other mammals. I guess from what you've said you disagree with me on that. Could you explain why?
 
  • #78
Graeme M said:
I do think though that I share the experiences, the awareness itself, with other mammals. I guess from what you've said you disagree with me on that. Could you explain why?

I think I explained why pretty clearly in my previous post #74: the reason is, fundamentally, that non-human animals do not give an "introspective report" of such internal mental experiences, so how can we be sure that they are, indeed, having such experiences in the same way that we humans do? And don't get caught in the trap of thinking that they don't give such introspective reports because of their lack of a voicebox or an opposable thumb. That has nothing to do with it. We would be able to detect such reports if they were there. Devising sophisticated techniques to look for such reports is what primatologists and "animal consciousness" researchers do for a living. Also, look at Stephen Hawking: all he can do these days is twitch a cheek muscle, and he can carry on a black hole war with Lenny Susskind. So I think it's clear that the inability of non-human animals to give an introspective report of their internal experiences is not due to any body-structure limitations; it's due to the fact that they are not attempting to communicate such reports.

Graeme M said:
To be aware that you are aware of red is really just a wrinkle on an established function, isn't it?

I think it's a bit more than that.

In any case, I'll take a look at the Graziano reference you posted later on today and perhaps give a separate response to that in relation to the other comments in your post.
 
  • #79
The 'definition' of consciousness is generally broken into two parts, following Chalmers. The first is "psychological consciousness" and the second is "phenomenal consciousness". In simple terms, psychological consciousness is the easy problem because phenomena such as how things (e.g., neurons) function or interact are objectively observable. Phenomenal consciousness is the hard problem because phenomena such as our subjective experience of red or pain or any other feeling are not objectively observable. We can have an animal that can distinguish certain wavelengths of light such as red, and therefore use that ability to perform a function, and that's easy because we can, in principle, observe how neurons interact to create that function. But why an animal or human should have some subjective experience of red at all is the hard part. Why red should have the subjective quality it does, as opposed to another color or some other subjective experience altogether, is what needs to be explained if one claims to be explaining the hard problem.
 
  • #80
madness said:
Thinking again, it's a little more complicated. If Graziano subscribed to functionalism, then any system which implements his proposed function should be conscious. Presumably he would take a functionalist approach, but that leads to problems like the China brain.

Is functionalism incompatible with Tononi's theory? I don't see why one couldn't have a China brain configured to have a certain amount of phi.

madness said:
In addition to phi, there is the "qualia space (Q)": http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1000462. This is intended to address the kind of experiences they will have.

Here he's really talking about the report of a subjective experience. He can't say whether the subjective experiences corresponding to the same report are really the same subjective experience.
 
  • #81
atyy said:
Is functionalism incompatible with Tononi's theory? I don't see why one couldn't have a China brain configured to have a certain amount of phi.

It is incompatible:

http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1003588
"there can be true “zombies” – unconscious feed-forward systems that are functionally equivalent to conscious complexes"

Which is the basis of a major criticism in this paper:

http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1004286
"Since IIT is not a form of computational functionalism, it is vulnerable to fading/dancing qualia arguments"

I'm not sure if the China brain specifically could have high phi or not.

atyy said:
Here he's really talking about the report of a subjective experience. He can't say whether the subjective experiences corresponding to the same report are really the same subjective experience.

Why do you think that he is talking about reports? It seems clear to me that he is talking about experiences rather than reports. For example, why would a behavioural report correspond to a geometrical shape in an information space? That appears unreasonable to me.
 
  • #82
madness said:
It is incompatible:

http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1003588
"there can be true “zombies” – unconscious feed-forward systems that are functionally equivalent to conscious complexes"

Which is the basis of a major criticism in this paper:

http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1004286
"Since IIT is not a form of computational functionalism, it is vulnerable to fading/dancing qualia arguments"

I'm not sure if the China brain specifically could have high phi or not.

I agree with your statements as Tononi's reading of his own theory. However, I am unsure if Tononi has interpreted his theory correctly. On the other hand, since I'm skeptical that his theory solves the hard problem, or even the pretty hard problem, it would be like being skeptical of the wrong interpretation of the wrong theory.

madness said:
Why do you think that he is talking about reports? It seems clear to me that he is talking about experiences rather than reports. For example, why would a behavioural report correspond to a geometrical shape in an information space? That appears unreasonable to me.

Can the theory be falsified? My thinking was that if it can be falsified, then it must be falsified by reports, and if it is about experiences then it cannot be falsified. For example, he begins by saying "By contrast, the cerebellum - a part of our brain as complicated and even richer in neurons than the cortex – does not seem to generate much experience at all: if the cerebellum has to be removed surgically, consciousness is hardly affected. What is special about the corticothalamic system, then, that is not shared by the cerebellum?". How can we know that the quality of consciousness is hardly affected when the cerebellum is removed? Is that a testable statement? Or does he only mean that reports of the quality of consciousness are hardly affected? If the two cannot be distinguished, then it seems that all that has really been addressed is the easy problem.
 
  • #83
atyy said:
"By contrast, the cerebellum - a part of our brain as complicated and even richer in neurons than the cortex – does not seem to generate much experience at all: if the cerebellum has to be removed surgically, consciousness is hardly affected. What is special about the corticothalamic system, then, that is not shared by the cerebellum?". How can we know that the quality of consciousness is hardly affected when the cerebellum is removed? Is that a testable statement? Or does he only mean that reports of the quality of consciousness are hardly affected? If the two cannot be distinguished, then it seems that all that is really been addressed is the easy problem.

There's no guarantee that the cerebellum doesn't possess its own kind of rudimentary consciousness, or that the conscious experience we have as individuals is a conglomerate of many such systems.

But... I still think we're touching on the hard problem if we find consistencies in reporting - we're not guaranteed of the result, but neither are we for theories of gravity or electrodynamics. In the end, they're all just models abstracted into terms of human thinking. And to that end, I don't think the hard problem is much different from any other problem in physics - we simply can't know whether our models are correct in the way we conceptually view them; we can only observe when our models "work" (successfully make valid, consistent predictions).
 
  • #84
atyy said:
I agree with your statements as Tononi's reading of his own theory. However, I am unsure if Tononi has interpreted his theory correctly. On the other hand, since I'm skeptical that his theory solves the hard problem, or even the pretty hard problem, it would be like being skeptical of the wrong interpretation of the wrong theory.

I think it's more a prediction than an interpretation. Basically, for a system with a given input-output relation, the value of Phi can be very different depending on what goes on in the middle. If Phi is the conscious level, and a "function" is an input-output relationship, then IIT is not a functionalist theory.
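To make that concrete, here's a toy sketch (my own illustration, not IIT's actual formalism): two systems with identical input-output behaviour but different internal composition. A purely input-output criterion cannot tell them apart, whereas a measure computed over internal causal structure, like phi, can in principle assign them different values - and per the Oizumi et al. paper quoted earlier, strictly feed-forward systems get phi = 0 regardless of their input-output behaviour.

Code:
# Toy sketch (mine, not IIT's formalism): two systems computing the same
# input-output function -- output = current input XOR previous input --
# with different internal composition. Identical "function", so a
# functionalist treats them alike; a structure-sensitive measure need not.
from itertools import product

class Minimal:
    """One internal unit holding the previous input."""
    def __init__(self):
        self.m = 0
    def step(self, x):
        out = self.m ^ x
        self.m = x
        return out

class Redundant:
    """Two internal units redundantly holding the same bit."""
    def __init__(self):
        self.m1 = self.m2 = 0
    def step(self, x):
        out = self.m1 ^ x  # m2 is causally idle redundancy
        self.m1 = self.m2 = x
        return out

# Identical input-output behaviour on every length-6 input sequence:
for seq in product([0, 1], repeat=6):
    a, b = Minimal(), Redundant()
    assert [a.step(x) for x in seq] == [b.step(x) for x in seq]
print("Same input-output function, different internal structure.")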

atyy said:
Can the theory be falsified? My thinking was that if it can be falsified, then it must be falsified by reports, and if it is about experiences then it cannot be falsified. For example, he begins by saying "By contrast, the cerebellum - a part of our brain as complicated and even richer in neurons than the cortex – does not seem to generate much experience at all: if the cerebellum has to be removed surgically, consciousness is hardly affected. What is special about the corticothalamic system, then, that is not shared by the cerebellum?". How can we know that the quality of consciousness is hardly affected when the cerebellum is removed? Is that a testable statement? Or does he only mean that reports of the quality of consciousness are hardly affected? If the two cannot be distinguished, then it seems that all that has really been addressed is the easy problem.

The theory is testable only insofar as we can rely on behavioural reports. I'm not sure that this is equivalent to saying that it is a theory of behavioural reports, however.
 
  • #85
madness said:
I think it's more a prediction than an interpretation. Basically, for a system with a given input-output relation, the value of Phi can be very different depending on what goes on in the middle. If Phi is the conscious level, and a "function" is an input-output relationship, then IIT is not a functionalist theory.

Yes, if by functionalism one means "input-output relation", but I wasn't sure exactly how strictly the term was being used, and whether it included the China brain.

Anyway now that I think I understand the terminology a bit better, I do agree that phi characterizes an equivalence class of dynamical systems in a way that the "internal structure" matters. I believe that some china brains can be configured to have high phi. I also do not believe Graziano's theory is pure functionalism, since it is a theory of internal structure.

madness said:
The theory is testable only insofar as we can rely on behavioural reports. I'm not sure that this is equivalent to saying that it is a theory of behavioural reports, however.

Yes, it may not be a theory of behavioral reports. But to solve the hard problem or the harder aspects of the pretty hard problem, that uncertainty should be removed.
 
  • #86
atyy said:
Yes, if by functionalism one means "input-output relation", but I wasn't sure exactly how strictly the term was being used, and whether it included the China brain.

Anyway now that I think I understand the terminology a bit better, I do agree that phi characterizes an equivalence class of dynamical systems in a way that the "internal structure" matters. I believe that some china brains can be configured to have high phi. I also do not believe Graziano's theory is pure functionalism, since it is a theory of internal structure.

My point was not that functionalism is a problem for Graziano, but that he does not solve the hard problem unless you add some extra assumptions, such as "functionalism is true". For example, in Graziano's theory, is a China brain version of his system conscious? What about a feedforward equivalent system which implements the same function? I don't think Graziano's theory, on its own, can answer these questions, meaning that it does not solve the hard problem (or even the pretty hard problem).

At best, Graziano gives an explanation of why humans are conscious. But even that I disagree with, because it is really a theory of why humans would report conscious experience. It might even be a form of eliminative materialism (or to put it another way, it is consistent with eliminative materialism, but also consistent with almost any other philosophy of mind).

In my opinion, Graziano's theory says nothing interesting about consciousness. An explanation of behavioural reports has never been a deep and interesting question. The only way to get anything more out of his theory is to add something like "functionalism", "eliminative materialism", or some other well-established philosophy.

There is only one case in which I can see Graziano's theory as providing an attempt at the hard problem. If we take Graziano's theory, take functionalism, and take the view that a system is conscious if and only if it implements Graziano's proposed function, then we can determine whether an arbitrary physical system is conscious. For any other interpretation I think his theory would fall short of the mark.
 
  • #87
madness said:
My point was not that functionalism is a problem for Graziano, but that he does not solve the hard problem unless you add some extra assumptions, such as "functionalism is true". For example, in Graziano's theory, is a China brain version of his system conscious? What about a feedforward equivalent system which implements the same function? I don't think Graziano's theory, on its own, can answer these questions, meaning that it does not solve the hard problem (or even the pretty hard problem).

At best, Graziano gives an explanation of why humans are conscious. But even that I disagree with, because it is really a theory of why humans would report conscious experience. It might even be a form of eliminative materialism (or to put it another way, it is consistent with eliminative materialism, but also consistent with almost any other philosophy of mind).

In my opinion, Graziano's theory says nothing interesting about consciousness. An explanation of behavioural reports has never been a deep and interesting question. The only way to get anything more out of his theory is to add something like "functionalism", "eliminative materialism", or some other well-established philosophy.

There is only one case in which I can see Graziano's theory as providing an attempt at the hard problem. If we take Graziano's theory, take functionalism, and take the view that a system is conscious if and only if it implements Graziano's proposed function, then we can determine whether an arbitrary physical system is conscious. For any other interpretation I think his theory would fall short of the mark.

Yes, I more or less (maybe less, but that's not the point) agree with all that. My main puzzlement is why you and Pythagorean think Tononi comes any closer to overcoming these problems. But it seems we have at least some agreement that Tononi does not address the hard problem, only the pretty hard problem at best. Also, I think we agree that while Tononi's qualia space may be more than a theory of reports, it is not clear that it is not just a theory of reports.
 
  • #88
atyy said:
My main puzzlement is why you and Pythagorean think Tononi comes any closer to overcoming these problems.

In my case, it's the way Tononi frames the question, not so much the way he tries to answer it. He's framing it in terms of quantifiable physical events, whereas Graziano's explanation is (or appears to be) more conceptual and qualitative.
 
  • #89
atyy said:
Yes, I more or less (maybe less, but that's not the point) agree with all that. My main puzzlement is why you and Pythagorean think Tononi comes any closer to overcoming these problems.

My reasons are similar to Pythagorean's. If Tononi's theory were correct, it would solve the (pretty) hard problem. Whether or not Graziano's theory is correct has no bearing on the hard problem.

atyy said:
But it seems we have at least some agreement that Tononi does not address the hard problem, only the pretty hard problem at best.

I could make a similar claim about Newton's theory of gravity, or Einstein's general relativity. Solving the "pretty hard problem" is in my opinion the main goal of a scientific theory of consciousness.

atyy said:
Also, I think we agree that while Tononi's qualia space may be more than a theory of reports, it is not clear that it is not just a theory of reports.

To me it is clear that it's not a theory of reports at all. It's a bit like saying quantum mechanics is a theory of measurements and has nothing to do with subatomic particles.
 
  • #90
madness said:
To me it is clear that it's not a theory of reports at all. It's a bit like saying quantum mechanics is a theory of measurements and has nothing to do with subatomic particles.

Yes, quantum mechanics (in the orthodox interpretation) is a theory of measurements and has nothing to do with subatomic particles :oldwink:

To me, the most interesting bits of physics are questions of interpretation, e.g. how can we make quantum mechanics into a theory of reality? How can we make sense of renormalization in quantum field theory? How can we understand why some physical systems are conscious? Outlining possible answers to the first two questions took conceptual breakthroughs (by Bohm and Wilson respectively), and I expect the last also needs one.
 
  • #91
I've been busy these past few days and haven't had a chance to properly follow the discussion. On reading it through, I still feel I don't understand some basic foundations to the idea of a 'hard' problem for consciousness.

Much commentary and discussion here and elsewhere seems to me to address things such as self-awareness or language, or perhaps more exactly cognitive function, rather than consciousness. My take on the hard problem is best summarised by Pythagorean's statement:

"Really, the only aspect of consciousness that presents an epistemological challenge is the subjective experience: that matter can have feelings when it's arranged in the right way."

My idea of the hard problem is simply that there is an experience of awareness. Why should a brain that is just doing physical stuff have an experience of, for example, an external world? The external world, whatever that is, appears in my mind as "out there". My brain has an inner representation of the external world but the really curious thing is that it feels like it is out there and it is the world I am part of. That is, my interactions with this mental representation fairly accurately resemble my interactions with external objects.

Consciousness itself seems pretty straightforward, relatively speaking. That is, it seems to me to be the facility to represent the external world within a system such that the system can interact with the external world via that representation. That explains some of my earlier comments - it seems to me that any organism which can have some kind of representation of the external world and respond behaviourally to that is therefore conscious.

So, if I sense the world and react to it, I am conscious. That would be my starting point. All the extra bits that DiracPool describes are remarkable features and represent an evolving complexity to biological consciousness, but as I suggested earlier, why does that make for something extraordinary? In a biological sense, doesn't it just boil down to responding behaviourally to a representation of the world?

To me then an 'easy' problem is explaining how this representation arises mechanically, a 'hard' problem is explaining why it feels to me that I am experiencing the world.

Tononi's idea, as much as I can understand it, sounds good. But it's just a quantification model. That is, a system is conscious with a high enough phi. And it has experience if the shape in Q space is significant enough. But that doesn't address the hard problem, if the hard problem is as defined by Pythagorean. It would be useful, if it worked, to be able to predict a conscious experience within another organism. But just because the Q space shape is equivalent between an experience of mine and an experience of a blind burrowing mole only tells me that the mole is functionally conscious. It still offers no explanatory value for how I and the mole can actually come to have some feelings about the world.

I think a similar problem besets Graziano's theory, though I admit to not being quite sure what he means. If there is an Attention Schema model that informs us of an abstracted model of attention, and this is what gives rise to our qualia of experience, there is still the problem of why it is that we have that experience.

Or so it seems to me. What am I missing here?
 
  • #92
Hi Graeme,
Your description of the easy and hard problems of consciousness is almost correct.
Graeme M said:
Much commentary and discussion here and elsewhere seems to me to address things such as self-awareness or language, or perhaps more exactly cognitive function, rather than consciousness. My take on the hard problem is best summarised by Pythagorean's statement:

"Really, the only aspect of consciousness that presents an epistemological challenge is the subjective experience: that matter can have feelings when it's arranged in the right way."
The quote from Pythagorean is correct, albeit brief and not meant to be a comprehensive description of the hard problem or phenomenal consciousness.
My idea of the hard problem is simply that there is an experience of awareness.
Not exactly… We should use the definitions provided for “phenomenal” versus “psychological” consciousness as given by Chalmers since these also reflect the “hard problem” versus the “easy problem” respectively. The experience of awareness is only one of the phenomena picked out by phenomenal consciousness.

In his paper, “Facing up to the problem of consciousness”, Chalmers breaks up consciousness into two groups. The first are objectively observable phenomena, which he labels “easy”. We should all be able to agree on what is being observed when it comes to these phenomena, and they should be accessible to the normal methods of science. Chalmers states:
The easy problems of consciousness include those of explaining the following phenomena:
• the ability to discriminate, categorize, and react to environmental stimuli;
• the integration of information by a cognitive system;
• the reportability of mental states;
• the ability of a system to access its own internal states;
• the focus of attention;
• the deliberate control of behavior;
• the difference between wakefulness and sleep.

All of these phenomena are associated with the notion of consciousness. For example, one sometimes says that a mental state is conscious when it is verbally reportable, or when it is internally accessible. Sometimes a system is said to be conscious of some information when it has the ability to react on the basis of that information, or, more strongly, when it attends to that information, or when it can integrate that information and exploit it in the sophisticated control of behavior. We sometimes say that an action is conscious precisely when it is deliberate. Often, we say that an organism is conscious as another way of saying that it is awake.
Chalmers then invokes Nagel’s “What is it like to be a bat?”:
The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect. As Nagel (1974) has put it, there is something it is like to be a conscious organism. This subjective aspect is experience. When we see, for example, we experience visual sensations: the felt quality of redness, the experience of dark and light, the quality of depth in a visual field. Other experiences go along with perception in different modalities: the sound of a clarinet, the smell of mothballs. Then there are bodily sensations, from pains to orgasms; mental images that are conjured up internally; the felt quality of emotion, and the experience of a stream of conscious thought. What unites all of these states is that there is something it is like to be in them. All of them are states of experience.
In his book “The Conscious Mind”, Chalmers takes a slightly different tack and, instead of breaking up consciousness into easy and hard phenomena, calls them psychological consciousness and phenomenal consciousness (p-consciousness for short) respectively. His book is much more thorough and worth referring to. For p-consciousness, Chalmers lists a number of different “experiences” as follows:
Visual experiences. Among the many varieties of visual experience, color sensations stand out as the paradigm examples of conscious experience, due to their pure, seemingly ineffable qualitative nature. … Why should it feel like that? Why should it feel like anything at all? …

Other aspects of visual experience include the experience of shape, of size, of brightness and of darkness. A particularly subtle aspect is the experience of depth. … Certainly there is an intellectual story one can tell about how binocular vision allows information from each eye to be consolidated into information about distances, thus enabling more sophisticated control of action, but somehow this causal story does not reveal the way the experience is felt. Why that change in processing should be accompanied by such a remaking of my experience was mysterious to me as a ten-year-old, and is still a source of wonder today.

Auditory experiences. In some ways, sounds are even stranger than visual images. The structure of images usually corresponds to the structure of the world in a straightforward way, but sounds can seem quite independent. …

Musical experience is perhaps the richest aspect of auditory experience, although the experience of speech must be close. Music is capable of washing over and completely absorbing us, surrounding us in a way that a visual field can surround us but in which auditory experiences usually do not. …

Tactile experiences. Textures provide another of the richest quality spaces that we experience: think of the feel of velvet, and contrast it to the texture of cold metal, or a clammy hand, or a stubbly chin. …

Olfactory experiences. Think of the musty smell of an old wardrobe, the stench of rotting garbage, the whiff of newly mown grass, the warm aroma of freshly baked bread. Smell is in some ways the most mysterious of all the senses due to the rich, intangible, indescribable nature of smell sensations. … It seems arbitrary that a given sort of molecule should give rise to this sort of sensation, but give rise it does.

Taste experiences. Psychophysical investigations tell us that there are only four independent dimensions of taste perception: sweet, sour, bitter, and salt. But this four-dimensional space combines with our sense of smell to produce a great variety of possible experiences…

Experiences of hot and cold. An oppressively hot, humid day and a frosty winter’s day produce strikingly different qualitative experiences. Think also of the heat sensations on one’s skin from being close to a fire, and the hot-cold sensation that one gets from touching ultra-cold ice.

Pain. Pain is a paradigm example of conscious experience, beloved by philosophers. Perhaps this is because pains form a very distinctive class of qualitative experiences, and are difficult to map directly onto any structure in the world or in the body, although they are usually associated with some part of the body. … There are a great variety of pain experiences from shooting pains and fierce burns through sharp pricks to dull aches.

Other bodily sensations. Pains are only the most salient kind of sensations associated with particular parts of the body. Others include headaches … hunger pangs, itches, tickles and the experience associated with the need to urinate. …

Mental imagery. Moving ever inward, toward experiences that are not associated with particular objects in the environment or the body but that are in some sense generated internally, we come to mental images. There is often a rich phenomenology associated with visual images conjured up in one’s imagination, though not nearly as detailed as those derived from direct visual perception. …

Conscious thought. Some of the things we think and believe do not have any particular qualitative feel associated with them, but many do. This applies particularly to explicit, occurrent thoughts that one thinks to oneself, and to various thoughts that affect one’s stream of consciousness. …

Emotions. Emotions often have distinctive experiences associated with them. The sparkle of a happy mood, the weariness of a deep depression, the red-hot glow of a rush of anger, the melancholy of regret: all of these can affect conscious experiences profoundly, although in a much less specific way than localized experiences such as sensations. …

… Think of the rush of pleasure one feels when one gets a joke; another example is the feeling of tension one gets when watching a suspense movie, or when waiting for an important event. The butterflies in one’s stomach that can accompany nervousness also fall into this class.

The sense of self. One sometimes feels that there is something to conscious experience that transcends all these specific elements: a kind of background hum, for instance, that is somehow fundamental to consciousness and that is there even when the other components are not. … there seems to be something to the phenomenology of self, even if it is very hard to pin down.

This catalog covers a number of bases, but leaves out as much as it puts in. I have said nothing, for instance, about dreams, arousal and fatigue, intoxication, or the novel character of other drug-induced experiences. …
The best way I can describe p-consciousness is as a set of phenomena: the set of phenomena characterized by phenomenal experiences. The term “phenomenal consciousness” picks out the set of phenomena known as qualia, best described as being subjectively observable but not objectively observable. There is something that occurs during the operation of a conscious brain which cannot be objectively observed. These phenomena are subjective in nature, and although they supervene on the brain, most will concede that they cannot be measured or described by explaining what goes on within the brain, such as the interactions between neurons, the resulting EM fields produced, or anything else that is objectively measurable.

The alternative is to either explain phenomenal consciousness in strict physical terms (ie: so the hard problem is just another easy problem) or we dismiss phenomenal consciousness altogether (ie: eliminativism).

Chalmers, David J. "Facing up to the problem of consciousness." Journal of consciousness studies 2.3 (1995): 200-219.
http://consc.net/papers/facing.html
… My brain has an inner representation of the external world but the really curious thing is that it feels like it is out there and it is the world I am part of. That is, my interactions with this mental representation fairly accurately resemble my interactions with external objects.

Consciousness itself seems pretty straightforward, relatively speaking. That is, it seems to me to be the facility to represent the external world within a system such that the system can interact with the external world via that representation. That explains some of my earlier comments - it seems to me that any organism which can have some kind of representation of the external world and respond behaviourally to that is therefore conscious.

So, if I sense the world and react to it, I am conscious. That would be my starting point. All the extra bits that DiracPool describes are remarkable features and represent an evolving complexity to biological consciousness, but as I suggested earlier, why does that make for something extraordinary? In a biological sense, doesn't it just boil down to responding behaviourally to a representation of the world?

To me then an 'easy' problem is explaining how this representation arises mechanically, a 'hard' problem is explaining why it feels to me that I am experiencing the world.
A computational system can have a “representation” of the world without having any phenomenal experience of it. My computer, for example, has a representation of one page of the internet on the screen that I'm looking at. That representation reflects the physical state of both (a small portion of) my computer and some computer it is getting the web page from. But there's no need to suggest my computer is actually having an experience of this representation. We wouldn't generally suggest that the colors on that web page are being experienced by any of the computers. Having a representation of the world bound up in the physical state of some system does not mean the system is having an experience of that representation.
Tononi's idea, as much as I can understand it, sounds good. But it's just a quantification model. That is, a system is conscious with a high enough phi. And it has experience if the shape in Q space is significant enough. But that doesn't address the hard problem, if the hard problem is as defined by Pythagorean. It would be useful, if it worked, to be able to predict a conscious experience within another organism. But just because the Q space shape is equivalent between an experience of mine and an experience of a blind burrowing mole only tells me that the mole is functionally conscious. It still offers no explanatory value for how I and the mole can actually come to have some feelings about the world.
Agreed. Tononi's theory doesn't actually say how or why some system has a phenomenal experience, it just suggests that IFF the system has a high enough phi, THEN the system must be having some sort of experience.
 
  • #93
Thanks Q-Goest.

Q_Goest said:
A computational system can have a “representation” of the world without having any phenomenal experience of it. My computer, for example, has a representation of one page of the internet on the screen that I'm looking at. That representation reflects the physical state of both (a small portion of) my computer and some computer it is getting the web page from. But there's no need to suggest my computer is actually having an experience of this representation. We wouldn't generally suggest that the colors on that web page are being experienced by any of the computers. Having a representation of the world bound up in the physical state of some system does not mean the system is having an experience of that representation.

Why is it presumed that consciousness must be accompanied by subjective experience to be consciousness? If a brain consists of cells that connect via electrochemical signals, all we have is a computational network. All that can be happening is input->processing->output. The processing bit is complex to unravel, but that's just a mechanical problem. Our subjective experience, however it happens and whatever it seems like, is no more than part of the processing stage. We can say that subjective experience is a hard problem, but at the end of the day why is that relevant to assessing consciousness? Put another way, regardless of any mystery here, is not a brain just doing the same thing your computer is doing?

Therefore, why should we not consider your computer as being conscious? If we applied Tononi's theory to your computer and the phi value were high enough (but I assume with a low Q-space presentation), then the computer might be conscious. It may not be having a subjective or phenomenal experience, but it might be conscious. I am not saying that I think a computer IS conscious, I am asking what physical distinction can we impose on a system to prevent its claim to consciousness? And why?
 
  • #94
Graeme M said:
... Put another way, regardless of any mystery here, is not a brain just doing the same thing your computer is doing?
... I am not saying that I think a computer IS conscious, I am asking what physical distinction can we impose on a system to prevent its claim to consciousness? And why?
Whether or not a computer can have a subjective experience has been debated for a very long time. There are good arguments on both sides of the issue but because there are so many logical dilemmas created by computationalism, there is no unanimous agreement. Going into those issues is outside the scope of this thread and is generally not supported by PF.
 
  • #95
Thanks Q_Goest. And I agree, I think the original question I posed has been well explored and further discussion of this nature is not likely to be in the spirit of PF.
 
  • #96
Thanks for joining us, Q_Goest!

Q_Goest said:
Agreed. Tononi's theory doesn't actually say how or why some system has a phenomenal experience, it just suggests that IFF the system has a high enough phi, THEN the system must be having some sort of experience.

The presumption is that the integration of information (in a particular way) is how consciousness arises. Tononi essentially states an equivalence between them. Just as in typical scientific discourse, we would then take this model and see if it makes predictions about consciousness (which Tononi has done with coma and sleeping patients). This is as close as science can get to any question: making models of the phenomena that "work" (robustly make successful predictions). We can never really know if the map we make really describes the territory or just works to predict its behavior (then we get into interpretations, as with QM).

So as far as the hard problem is concerned, all we can really do in science is work on the "pretty hard" problem, which requires a careful integration of philosophy and science.
 
  • #97
Pythagorean, did you post any links to papers about Tononi's work with coma and sleeping patients? I may have missed those. If not, do you have any references I could chase up?
 
  • #99
Great, thanks for that.
 
  • #100
I am not sure if anyone is still following this thread, but I've read more of the Tononi papers, and one question that comes to mind is how one could use this theory to make predictions about a particular network. It seems necessary to be able to compute the network complexity (i.e., nodes/connectivity) before the phi value can be computed. Wouldn't that be largely prohibitive for, say, a human brain, given the number of nodes and possible connections? Would IIT be practically applicable to anything other than relatively simple networks?
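To put some rough numbers on that worry (this is my own back-of-the-envelope sketch, not anything from Tononi's papers): an exact phi calculation needs the transition structure over all 2^n states of an n-node system, plus a search over partitions of the nodes for the minimum-information partition. Counting only bipartitions already explodes:

Code:
# Back-of-the-envelope sketch (mine, not from Tononi's papers) of why exact
# phi looks prohibitive: both the state space and the partition search grow
# exponentially. Bipartitions alone understate the search, since IIT also
# considers finer partitions and all candidate subsystems.

def bipartitions(n):
    # Ways to split n nodes into two non-empty groups.
    return 2 ** (n - 1) - 1

for n in [4, 10, 20, 302]:  # 302 = neuron count of C. elegans
    print(f"n = {n:>3}: {2**n:.2e} states, {bipartitions(n):.2e} bipartitions")

So even a nematode-scale network looks out of reach for brute force, which is presumably why empirical work has to rely on small model systems or on proxies and approximations rather than exact phi.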

That said, Tononi's proposal regarding information integration seems very sensible. The paper linked above by Pythagorean notes that loss of consciousness in sleep is very likely due to a breakdown in overall network integration, especially between dispersed functional modules.

This paper http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1003271 notes that propofol-induced unconsciousness is characterised by a loss of wide-scale integration of information processing (that is, increased clustering of connectivity occurs under these conditions) and reduced efficiency in information distribution.
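To get a feel for the kind of graph measures behind that claim, here's a small sketch (my own construction, not the paper's analysis) using networkx: a modular, highly clustered network scores high on clustering and comparatively low on global efficiency, while a randomly wired network of the same size does the reverse.

Code:
# Illustrative sketch (mine, not the paper's analysis): clustering vs global
# efficiency for a modular network and a randomly wired one of the same size.
# Exact numbers depend on the chosen sizes and the random seed.
import networkx as nx

modular = nx.connected_caveman_graph(5, 8)  # 5 cliques of 8 nodes, sparsely bridged
random_net = nx.gnm_random_graph(40, modular.number_of_edges(), seed=1)

for name, g in [("modular/clustered", modular), ("randomly wired", random_net)]:
    print(f"{name}: clustering = {nx.average_clustering(g):.2f}, "
          f"global efficiency = {nx.global_efficiency(g):.2f}")

Loosely speaking, the propofol result is a shift from the second profile toward the first.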

So there is empirical evidence for changes in consciousness through loss of integration. But then, isn't that somewhat self-evident? If I am conscious for a given state of connectivity, reducing that connectivity might reduce consciousness.

Nonetheless, what is interesting is what that says in relation to the original subject of this thread. Prinz's AIR suggests that consciousness arises from attended intermediate-level representations that are instantiated neurally via what he calls gamma vectorwaves. So it is synchronous firing in the gamma frequency that facilitates consciousness, yet here Tononi specifically argues that it is integration which does this, as neural firing patterns remain detectable even in sleep.

However, I don't think I see that as especially problematic for either view. If neural cell populations need to fire at gamma frequencies to enable Prinz's AIR, it seems reasonable to consider that, for the construct as a whole, it is connectivity that plays the key role in its realisation. Thus even if representation requires synchronous firing of neurons at gamma frequencies, that in itself doesn't mean that we should be conscious of those representations if the total arrangement is insufficient. Prinz suggests that the vividness of consciousness arises through the number of cells that are firing, and that it can fade as the proportion of synchrony decreases.

So, on both IIT and AIR, wouldn't it make sense then that when neural correlates of representations occur at gamma frequencies, it is the overall dispersal of such synchrony across related functional modules that instantiates a conscious experience? In fact, I assume that for AIR to work, the very idea of a gamma vectorwave requires wide network connectivity.

Presuming, of course, that Prinz and Tononi are right - I am certainly not able to evaluate that! I realize that I might just be stating the obvious, or something already well known or discounted; I am more trying to get my head around what these various authors are saying and whether there are points of agreement between their ideas.
 
