Consciousness and the Attention Schema Theory - Meaningful?

Jesse Prinz's "The Conscious Brain" introduces the Attended Intermediate-level Representation theory, positing that consciousness arises at an intermediate sensory processing level, integrating lower-level brute processing and higher-level abstraction. Prinz suggests that emotional experiences also stem from this intermediate evaluation of bodily responses. In contrast, Michael Graziano's Attention Schema Theory argues that consciousness is a model of attention, constructed by the brain to represent what is being attended to, with awareness emerging from this model. Both theories explore the relationship between attention and consciousness, yet they differ in their explanations of how consciousness is formed and its implications for understanding subjective experience. The discussion highlights ongoing speculation and research in the field, particularly regarding the complexities of sensory processing and the challenges of defining consciousness.
  • #61
madness said:
Closer to what? I think it comes closer to a theory of the type that Chalmers suggested would be required to address the hard problem. From Chalmers' original paper:

"...there is a direct isomorphism between certain physically embodied information spaces and certain phenomenal (or experiential) information spaces. From the same sort of observations that went into the principle of structural coherence, we can note that the differences between phenomenal states have a structure that corresponds directly to the differences embedded in physical processes; in particular, to those differences that make a difference down certain causal pathways implicated in global availability and control. That is, we can find the same abstract information space embedded in physical processing and in conscious experience.

This leads to a natural hypothesis: that information (or at least some information) has two basic aspects, a physical aspect and a phenomenal aspect. This has the status of a basic principle that might underlie and explain the emergence of experience from the physical. Experience arises by virtue of its status as one aspect of information, when the other aspect is found embodied in physical processing."


If you mean closer to a correct theory of consciousness then I'm not sure.

How is it any different from the basic idea behind Graziano's, if one treats Graziano's boxes as outlining information flow between information-processing units?

Also, is it clear that one should accept Chalmers' proposed form of solution to the hard problem (because it seems that Graziano's ideas could be mapped onto Chalmers' proposed form of solution)? [I don't consider the hard problem, if it exists, to be defined by Chalmers, only to be named by him. I would include other arguments including Searle's "Chinese room argument" or Block's "China brain" that you mentioned, as well as the vaguer thoughts expressed by Witten above].
 
  • #62
Pythagorean said:
But that makes Graziano's approach idiosyncratic and anthropocentric. If we want to be able to test consciousness in a robot, we need a more generalized measurement of consciousness. Looking at information flow and boundaries is a more generalized measurement. Again, it doesn't appear Tononi's got it right, but Graziano's approach would be difficult to apply outside of humans.

In specifics perhaps, but if you look at just the first figure in the Graziano paper linked in the OP, why couldn't that be fleshed out and applied to robots?
 
  • #63
atyy said:
In specifics perhaps, but if you look at just the first figure in the Graziano paper linked in the OP, why couldn't that be fleshed out and applied to robots?

I was actually thinking about this. Not in reference to the figure, but in general. If Graziano's conjecture was framed in terms of information flow, it could be abstracted to look similar to Tononi's. But Graziano seems to frame a lot of his model phenomenologically in terms of human experience and psychology (which is what I meant by idiosyncratic).
 
  • #64
atyy said:
How is it any different from the basic idea behind Graziano's, if one treats Graziano's boxes as outlining information flow between information-processing units?

The major difference is that IIT proposes a fundamental relationship between some feature of physical systems and the associated conscious experience. Crucially, this relationship could not have been deduced from, or reduced to, the standard laws of physics which determine the activity of that physical system.

Graziano, on the other hand, proposes a mechanism which performs a function. This, under Chalmers' original formulation of the hard and easy problems, is by definition a solution to an "easy problem".
atyy said:
Also, is it clear that one should accept Chalmers' proposed form of solution to the hard problem (because it seems that Graziano's ideas could be mapped onto Chalmers' proposed form of solution)? [I don't consider the hard problem, if it exists, to be defined by Chalmers, only to be named by him. I would include other arguments including Searle's "Chinese room argument" or Block's "China brain" that you mentioned, as well as the vaguer thoughts expressed by Witten above].

My point is that Graziano's ideas could not be mapped onto any solution to Chalmers' formulation of the hard problem, as a result of his formulation of the problem rather than his form of a solution. Of course, you could reformulate Chalmers' definition of the problem, but I find that to be evasive unless you openly admit to a deflationary approach.

In my opinion, Chalmers has put forward the most carefully argued and comprehensive account of the hard problem, at least out of those I have read (http://www.amazon.com/dp/0195117891/?tag=pfamazon01-20). That's not to diminish the contributions of others (I have mentioned Nagel in this thread several times), but I consider that book to be pretty definitive on the issue.
 
  • #65
madness said:
The major difference is that IIT proposes a fundamental relationship between some feature of physical systems and the associated conscious experience.

madness can you briefly summarise how IIT does this? From my pretty sketchy take on what I read, IIT can quantify whether a system is conscious if it has a large enough phi value. But what does it say about how an experience emerges from the informational potential of the system? What measure quantifies that?
 
  • #66
Graeme M said:
madness can you briefly summarise how IIT does this? From my pretty sketchy take on what I read, IIT can quantify whether a system is conscious if it has a large enough phi value. But what does it say about how an experience emerges from the informational potential of the system? What measure quantifies that?

This is precisely the point. If IIT explained how experience emerges from information, using some standard physical mechanism, it would not be proposing a fundamental relationship between the physical process and the experience. Within the theory, the relationship between phi and consciousness is fundamental and does not reduce to any simpler underlying mechanisms or relationships.
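To make that "whole versus parts" point concrete, here is a deliberately crude toy calculation in Python. To be clear, this is not Tononi's actual measure (real phi involves perturbing the system and searching over partitions for the minimum information partition); it is only an assumed, minimal stand-in that shows how an integrated system can carry information that none of its parts carries alone.

```python
# Toy "whole minus parts" score for a 2-node binary network in which each
# node copies the OTHER node's previous state. NOT IIT's phi -- just an
# illustration of irreducibility.
import itertools
import math
from collections import Counter

STATES = list(itertools.product([0, 1], repeat=2))  # all 4 joint states

def step(state):
    a, b = state
    return (b, a)  # each node copies the other

def mutual_information(pairs):
    """I(X; Y) from a list of equally weighted (x, y) samples."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in joint.items())

# information the whole system's past carries about its present state
whole = mutual_information([(s, step(s)) for s in STATES])

# information each node's own past carries about its own present state
parts = sum(mutual_information([(s[i], step(s)[i]) for s in STATES])
            for i in range(2))

print(f"whole: {whole:.1f} bits, parts: {parts:.1f} bits, "
      f"toy score: {whole - parts:.1f} bits")
# whole: 2.0 bits, parts: 0.0 bits, toy score: 2.0 bits
```

Each node alone tells you nothing about its own next state, yet the whole system is perfectly predictable. Within IIT, it is that kind of irreducibility that is postulated, as a fundamental principle, to correspond to experience.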
 
  • #68
madness said:
My point is that Graziano's ideas could not be mapped onto any solution to Chalmers' formulation of the hard problem, as a result of his formulation of the problem rather than his form of a solution. Of course, you could reformulate Chalmers' definition of the problem, but I find that to be evasive unless you openly admit to a deflationary approach.

In my opinion, Chalmers has put forward the most carefully argued and comprehensive account of the hard problem, at least out of those I have read (http://www.amazon.com/dp/0195117891/?tag=pfamazon01-20). That's not to diminish the contributions of others (I have mentioned Nagel in this thread several times), but I consider that book to be pretty definitive on the issue.

Why can't Graziano's ideas be mapped onto Chalmers' proposed solution form? All Tononi is doing is saying some configuration characterized by phi is conscious. One could just as easily say, in the spirit of Graziano, that anything that has a model of itself interacting with the world is conscious. (See also Pythagorean's post #63.)

My criticism is that I do not believe Chalmers' proposed form of solution to the hard problem is satisfactory (consciousness is "nothing but" X, where X is some equivalence class of dynamical systems). If we believe Chalmers' proposed form of solution, it is difficult to see how the hard problem is hard, since why wouldn't we already accept Graziano's or Tononi's proposals? Even Penrose's would be plausible. Chalmers' idea is essentially that the hard problem should be solved by a definition. But surely what the hard problem is asking for is an explanation - and Graziano's comes far closer to that than Tononi's.
 
  • #69
atyy said:
Why can't Graziano's ideas be mapped onto Chalmers' proposed solution form? All Tononi is doing is saying some configuration characterized by phi is conscious. One could just as easily say, in the spirit of Graziano, that anything that has a model of itself interacting with the world is conscious. (See also Pythagorean's post #63.)

My understanding of Graziano's theory is that it proposes a mechanism which performs a function. Any such theory is, by Chalmers' definition, a solution to an easy problem. I think you are correct that, if you make the extra step to say "all systems which implement this mechanism are conscious" (and I think also some other statement such as "no systems which do not implement this mechanism are conscious") then you will have a theory which addresses the hard problem. Do you think these statements are reasonable for a proponent of Graziano's theory to make?

atyy said:
My criticism is that I do not believe Chalmers' proposed form of solution to the hard problem is satisfactory (consciousness is "nothing but" X, where X is some equivalence class of dynamical systems). If we believe Chalmers' proposed form of solution, it is difficult to see how the hard problem is hard, since why wouldn't we already accept Graziano's or Tononi's proposals? Even Penrose's would be plausible. Chalmers' idea is essentially that the hard problem should be solved by a definition. But surely what the hard problem is asking for is an explanation - and Graziano's comes far closer to that than Tononi's.

It depends. I've recently seen people talk of the "pretty hard problem", which is to provide a theory of exactly which physical systems are conscious, by how much, and what kind of experiences they will have, but without explaining why. I'm not sure we can ever achieve a solution to the real hard problem, because there is a similar "hard problem" with any scientific theory. Chalmers' and Tononi's proposed solutions seem to attack the pretty hard problem rather than the hard problem. Noam Chomsky actually does a great job of explaining this point.
 
  • #70
atyy said:
Chalmers' idea is essentially that the hard problem should be solved by a definition. But surely what the hard problem is asking for is an explanation - and Graziano's comes far closer to that than Tononi's.

My issue with Graziano's is that it comes "too close," I guess. More accurately, that it's some kind of analog to "over-fitting". Tononi proposes physical events as the mechanism (in terms of information theory) while Graziano proposes it in terms of psychological functions (that we may or may not know how to quantify the physics of). So Graziano's is more intuitively graspable, but it assumes too much for it to be generalizable to all systems. I think Tononi's "naive" approach is more suitable in that regard. Of course, the two approaches are not mutually exclusive, and perhaps equivalent in some limit.

Another approach that frames brain function in terms of information flow is Friston's "Free Energy Principle for the Brain"[1]. Friston doesn't directly try to answer the hard problem, but he sets out to understand the brain in a non-anthropocentric framework.
http://www.nature.com/nrn/journal/v11/n2/full/nrn2787.html
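For a flavour of what that framework looks like in practice, here is a minimal sketch of perceptual inference as free-energy minimisation, in the style of the predictive-coding tutorial literature. The generative mapping g(v) = v², the prior, and all the numbers are illustrative assumptions, not anything taken from Friston's paper:

```python
# Toy perceptual inference by gradient descent on variational free energy.
# A single hidden cause v generates a sensory sample u via g(v), with a
# Gaussian prior on v and Gaussian sensory noise. Inference adjusts the
# estimate phi to balance precision-weighted prediction errors.
v_prior, sigma_p = 3.0, 1.0   # prior mean and variance for the hidden cause
sigma_u = 1.0                 # sensory noise variance
u = 2.0                       # the observed sensory input

g = lambda v: v ** 2          # assumed generative mapping: cause -> prediction
g_prime = lambda v: 2 * v     # its derivative

phi = v_prior                 # start the estimate at the prior
lr = 0.01
for _ in range(500):
    eps_p = (phi - v_prior) / sigma_p  # prior prediction error
    eps_u = (u - g(phi)) / sigma_u     # sensory prediction error
    phi += lr * (eps_u * g_prime(phi) - eps_p)  # descend the free-energy gradient

# settles near 1.57, a compromise between the prior (3.0) and the value
# that would explain the input exactly (sqrt(2) ~ 1.41)
print(f"inferred cause: {phi:.2f}")
```

Notably, nothing in the scheme mentions experience; it describes inference and control, which fits the point above that Friston is after a general framework for brain function rather than an answer to the hard problem.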
 
  • #71
madness said:
If IIT explained how experience emerges from information, using some standard physical mechanism, it would not be proposing a fundamental relationship between the physical process and the experience. Within the theory, the relationship between phi and consciousness is fundamental and does not reduce to any simpler underlying mechanisms or relationships.

I think the main problem here in trying to come up with an explanation of what consciousness "is" lies in the fact that no one can really agree on what they are trying to define. Each person comes up with as broad or narrow a definition of the term as suits their needs--i.e., one that is interesting to them or that they think they can manage as far as constructing a model--with the end result that everyone ends up talking past each other.

Susan Pockett recently wrote an article dealing with a number of the models in this thread: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4243501/pdf/fnsys-08-00225.pdf

She divides contemporary consciousness models into "process" versus "vehicle" theories, with almost all currently fashionable models being of the "process" variety. That is certainly reflected in most of the models discussed in this thread thus far. As I've stated in earlier posts, I'm not too enamored with Susan personally; she's kind of a grump :oldmad:. However, I'd have to say I mostly agree with her assessment of information-based process theories in the article, especially Tononi's model. There are fundamental problems with these models in their treatment of the subject.

What are those problems, you might ask? Well, there are a number of them, but I think the principal problem here is the consistent reluctance of consciousness researchers, or even cognitive science researchers in general, to draw a sharp distinction between the function of the human brain and that of the non-human animal brain. To put it another way, I think the single biggest problem here is the idea that consciousness is a "thing" or a property of brains in general, and that what needs to be done is to figure out how neural assemblages generate this consciousness (to clarify, when I use the term "consciousness" here, I am referring to phenomenological consciousness, the kind that is characterized as the "hard problem").

The fact is that we, as humans, have no idea what the consciousness of a mouse is, or a cricket, or a frog, or even a chimpanzee. We can only speak of what it is like to have a "human" consciousness. This human consciousness comes with a lot of features that non-human consciousness does not come with. To name a few of these features: the certitude of a sense of self-awareness, of language capacity, of logic-sequential thought structures, of musical ability, of mathematical ability, of a "theory of mind" capacity to project onto other individuals (and unfortunately also onto non-human animals and even inanimate objects), of the capacity to issue introspective reports of qualitative experiences and thoughts, and many others. We don't know for sure if any non-human animals have any of these capacities. So it seems highly probable to me that the phenomenological consciousness we experience is somehow wrapped up in this suite of other uniquely human capacities we possess. I think that theories that try to model consciousness as a single physical process, and that are argued to apply to essentially all animal taxa, are largely missing the point and are, at best, academic exercises likely to yield little, if any, progress toward the goal of explaining human conscious experience. These models include Tononi's information-integration theory, McFadden's EM theory, Hameroff's microtubule model, the vast number of "quantum theories" of consciousness which equate the collapse of the wave function to human sentient experience, and even Graziano's "attention schema theory," which I'm seeing as simply another process model.

From: http://aeon.co/magazine/philosophy/how-consciousness-works/

" (The attention schema theory) says that awareness is not something magical that emerges from the functioning of the brain. When you look at the colour blue, for example, your brain doesn’t generate a subjective experience of blue. Instead, it acts as a computational device. It computes a description, then attributes an experience of blue to itself. The process is all descriptions and conclusions and computations. Subjective experience, in the theory, is something like a myth that the brain tells itself. The brain insists that it has subjective experience because, when it accesses its inner data, it finds that information."

I'm not sure what this is supposed to tell me about my conscious experience or how it is different from my cat's experience. His idea of mental processes being ordered and structured like a General looking at a model of his army is interesting, and probably true in a sense, but, again, it tells me nothing of why I need to have a phenomenological experience of that schematic construction. It also does not tell me whether or not a macaque monkey has a similar construction and phenomenological experience of such going on in their "minds." Is there a monkey General in the macaque's brain? I submit that, until we have an adequate brain-based model for how the human mind generates consciousness, and what that is empirically, it makes little sense to talk about animal consciousness at all, especially in terms of how to compare it to a human consciousness we haven't even defined yet.
 
  • #72
DiracPool said:
I submit that, until we have an adequate brain-based model for how the human mind generates consciousness, and what that is empirically, it makes little sense to talk about animal consciousness at all, especially in terms of how to compare it to a human consciousness we haven't even defined yet.

I don't agree wholeheartedly with this. Certainly we have accessibility with humans, and that grants us a faster way forward, but I don't think it's completely pointless to compare easy-problem data across species.

I also think consciousness is a bit of an outdated term. We've already naively broken it into constituents. It's an umbrella term that includes the subjective experience (qualia and the self), cognition, self-awareness, etc. Most of those are "easy" problems, and progress is always being made with them. Really, the only aspect of consciousness that presents an epistemological challenge is the subjective experience: that matter can have feelings when it's arranged in the right way. What that arrangement is, and how it gives rise to a self that experiences things, is really the "holy grail" of inquiries into consciousness (thus, the hard problem). I suspect that the hard problem is intimately connected to the easy problem, and that the more complete a picture we have of the easy problem, the better we can formulate a solution to the hard problem. But there's lots of the easy problem left to solve currently (the easy problem is not easy!).
 
  • #73
DiracPool, I agree completely about the question of definition, but then the rest of your comment confuses me.

I understood that science has a fair handle on how the brain and nervous system functions in terms of physical structure and operation, and that this is the same for all creatures with a nervous system and brain. I'd ask then if something unique to the mechanisms of function has been found in human brains. Certainly complexity or arrangement or functional expression might differ by degree, but at the end of the day don't we only see the same fundamentals at work?

For myself, I would be quite open to believing that phenomenologically the experience of a human being must be shared by most mammals and birds at least. I'm not at all sure I'd accept that the human brain has something else going on that gives those qualities listed above some capacity for expression in humans that is absent in other animals. I'd suggest that list really speaks to a level of intellectual capacity rather than one of experience.

If I can see red, or can feel anger or pain, or enjoy warmth, these are experiential properties that I think are subsumed within the meaning of consciousness. I'm reasonably confident other creatures can see red, be angry, feel pain or enjoy warmth.

If a creature can discern a red object and make a behavioural choice in regard to that discernment, on what basis should I consider that creature not to have experienced redness? Upon considering what it is for a human being to have these experiences we might assume that this curious property of subjective experience is some thing apart from the brain's material function, but isn't it rather anthropocentric to be unwilling to then assign that same experience to other beings when there is no evidence for some unique material property of a human brain?

That is, wouldn't it be more parsimonious to assume that in creatures with a brain and a nervous system that operate according to the same physical laws as a human brain, and about which we can predict behavioural responses to stimuli, subjective experience or consciousness is also present? Intellectual capacity may differ it is true, but consciousness as a fundamental property of a nervous system appears to me to be the simplest proposition.
 
  • #74
Graeme M said:
I'd ask then if something unique to the mechanisms of function has been found in human brains.

This raises the age-old question of whether there is something special about matter in biological form that yields the "spark of life" or élan vital of a living organism. Of course, for this discussion we can include sentient experience in that category. As far as we know, there is nothing magical or special about neurons that yields a special, non-physical sentient consciousness. It's all in the organization. There's nothing fundamentally unique about the human brain over other primate brains as far as its general architecture and neurochemistry. However, there is a significant difference from other primates in terms of the proportions of its gross structure. Specifically, that difference is the gross overdevelopment of the prefrontal cortex (PFC) and structures related to the PFC, such as the lateral mediodorsal nucleus of the thalamus and the post-Rolandic sensory association areas of the cortex that the PFC is directly connected with. These areas include the temporo-parietal junction (TPJ) you mentioned that Graziano discusses in his model. Although I haven't read the book you listed, the TPJ is a popular "convergence zone," as Damasio calls them, for speculation on the origins of higher cognitive functions in humans. It's not unique to Graziano's model. See: https://www.amazon.com/dp/0156010755/?tag=pfamazon01-20 However, the picture is much more complicated than simply localizing higher brain functions to certain brain regions or even small networks of regions.

The important point, though, is that if you look at the comparative neuroanatomy of primates, the human condition is not continuous with the development of pre-Homo forms or even Homo forms leading up to Homo erectus. The real bifurcation in brain development started with Homo erectus, and this is where to look for clues as to where "human uniqueness" came from.

Graeme M said:
I'm not at all sure I'd accept that the human brain has something else going on that gives those qualities listed above some capacity for expression in humans that is absent in other animals.

This opinion is likely because you haven't studied comparative neuroanatomy and think that conscious experience is principally a property of any network of interacting neurons, and only secondarily dependent on the particular organization of those networks, which may determine the "contents" of that sentient experience rather than the sentience itself. This is a common misconception (IMHO), and one I don't personally share.

Graeme M said:
For myself, I would be quite open to believing that phenomenologically the experience of a human being must be shared by most mammals and birds at least.

What do you base these beliefs on? Just a hunch? An opinion based on what you project is going on in the mind of a bird when she's hunting for a worm? Obviously the human brain has "something else going on" than a bird's or a mouse's. We can organize expeditions to Mars and build stealth bombers. Birds can build a nest.

Graeme M said:
If a creature can discern a red object and make a behavioural choice in regard to that discernment, on what basis should I consider that creature not to have experienced redness?

There's the rub. It's not about whether the creature "experiences" redness, it's whether the creature knows it is experiencing redness. That's the distinction I'm trying to draw. This, in my opinion, is what distinguishes human consciousness from whatever form of consciousness other animals may possess. Specifically, it is the ability of humans to be aware of and reflect upon their consciousness (call it meta-perception, etc., if you will) and, most importantly, the ability of the human to give an introspective report of that sentient experience. This introspective report is the criterion that most psychologists and psychophysiologists have traditionally used, and still use today, to definitively qualify an introspective conscious experience. No animal other than humans has demonstrated this capacity to date. So the proof is in the pudding.

Again, this is why I said in the previous post that we are not going to get a handle on what the consciousness of nonhuman animals is like until we first understand specifically what processes in the human brain are associated with sentient experience and the ability to report that experience. Once we accomplish this, we can then compare those human brain processes to those of a target nonhuman species and see how they match up. At this point, I think we'll then have a better grasp of what's going on in that animal's head, not only as far as their cognitive capacity, but also as to what mental experiences that animal may be having. Until then, any discussion of "animal consciousness" is simply idle conjecture in my opinion. So to address the question in your thread title, "...--Meaningful?", I would answer "not so meaningful." :redface:
 
  • #75
madness said:
My understanding of Graziano's theory is that it proposes a mechanism which performs a function. Any such theory is, by Chalmers' definition, a solution to an easy problem. I think you are correct that, if you make the extra step to say "all systems which implement this mechanism are conscious" (and I think also some other statement such as "no systems which do not implement this mechanism are conscious") then you will have a theory which addresses the hard problem. Do you think these statements are reasonable for a proponent of Graziano's theory to make?

I would guess so too. I think Pythagorean's comments are along the same lines.

madness said:
It depends. I've recently seen people talk of the "pretty hard problem", which is to provide a theory of exactly which physical systems are conscious, by how much, and what kind of experiences they will have, but without explaining why. I'm not sure we can ever achieve a solution to the real hard problem, because there is a similar "hard problem" with any scientific theory. Chalmers' and Tononi's proposed solutions seem to attack the pretty hard problem rather than the hard problem. Noam Chomsky actually does a great job of explaining this point.


But Tononi's proposal also does not address the "pretty hard problem", does it? It doesn't address the point you mention about "what kind of experiences they will have".
 
  • #76
atyy said:
I would guess so too. I think Pythagorean's comments are along the same lines.

Thinking again, it's a little more complicated. If Graziano subscribed to functionalism, then it should be any system which implements his proposed function. Presumably he would take a functionalist approach, but that leads to problems like the China brain.
atyy said:
But Tononi's proposal also does not address the "pretty hard problem", does it? It doesn't address the point you mention about "what kind of experiences they will have".

In addition to phi, there is the "qualia space (Q)" http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1000462. This is intended to address the kind of experiences they will have.
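As a loose illustration of the kind of object Q is (a geometry over probability distributions, not the actual construction in the linked paper), one can take a toy network, compute the distribution over past states that each "mechanism" specifies about the present, and then treat those distributions as points whose mutual distances give the "shape" of the experience:

```python
# Loose toy rendering of a "qualia space": each mechanism (a constraint on
# the present state) picks out a distribution over past states, i.e. a point
# in a probability simplex. Distances between the points sketch a geometry.
# This is illustrative only, not Tononi's construction.
import itertools
import numpy as np

STATES = list(itertools.product([0, 1], repeat=2))

def step(state):
    a, b = state
    return (b, a)  # each node copies the other

def repertoire(clamp):
    """Uniform distribution over past states consistent with clamped present bits."""
    w = np.array([1.0 if all(step(s)[i] == v for i, v in clamp.items()) else 0.0
                  for s in STATES])
    return w / w.sum()

points = {
    "node0 now 1": repertoire({0: 1}),
    "node1 now 0": repertoire({1: 0}),
    "both":        repertoire({0: 1, 1: 0}),
}

# pairwise distances between the repertoires
for a, b in itertools.combinations(points, 2):
    d = float(np.linalg.norm(points[a] - points[b]))
    print(f"{a} vs {b}: {d:.3f}")
```

Whether such a geometry tracks experiences themselves or merely reports of them is exactly the question at issue below.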
 
  • #77
DiracPool said:
What do you base these beliefs on? Just a hunch? An opinion based on what you project is going on in the mind of a bird when she's hunting for a worm? Obviously the human brain has "something else going on" than a bird's or a mouse's. We can organize expeditions to Mars and build stealth bombers. Birds can build a nest.

I suppose that I really don't know - perhaps, as you say, just a hunch. I think it comes back somewhat to what I mean by consciousness. I simply meant that if evolution has provided certain mechanisms by which brains and nervous systems can sense the world, I'd have thought it is most likely that this is done in much the same way for most animals. I don't know anything at all about comparative neuroanatomy, but does, say, the visual system of a rat work much the same way as that of a human?

If the general structures and processes are the same, then a rat should be aware of red, or hot, or whatever. Which is what I mean by consciousness. Humans may have more going on, but I don't see why that adds something extraordinary to the mix. To be aware that you are aware of red is really just a wrinkle on an established function, isn't it? An evolutionary improvement to enable more adaptive behaviours.

Susan Pockett wrote:
"We know we are conscious. Other humans look and act more or less like us, so when they tell us they have a particular conscious experience, we give them the benefit of the doubt. But what about a bit of hardware? Even a novice software writer could produce a piece of code that typed "I feel hot" whenever a thermostat registered a high temperature, but not many people would believe the appearance of this message meant the thermostat was experiencing hotness."

I'm not sure I'd agree that this application that reports it is hot is conscious, but then in essence isn't that all a human brain does in the same context? It senses heat, pulls together existing internal information and matches that with the new sensory data and generates a report such as "I feel hot". I am not sure I see anything much more happening there. To know that it is me that feels hot and that I have experienced this hotness seems to add little in a base process sense. What really helps is language (to share such information and build knowledge), capacity for abstract thinking (not a property of consciousness per se I think?) and functional hands (to apply knowledge in adapting the environment).

Regarding the bird and its worm, I think that broadly speaking the same things happen in her mind as happens in mine when I look for something. I perceive the world, I have a representation of what I am looking for and I match incoming data with that representation and when there is a match I grab it. My brain gives me a more useful set of routines to apply in how I go about my search, but isn't it really the same thing going on? Evolution has just given me a greater degree of functionality from the same basic toolkit.

Graziano suggests that this is exactly what evolution has done when it comes to our sense of experience. Although I only dimly grasp his idea, I think he is saying that what we think of as experience is not an actual experiential property at all. The Attention Schema is a model of the brain's process of attention - it makes us think that we are beings with awareness and experience but we aren't. We are neurons and connections and processes. We are conscious of things, or aware of things, just like other animals. But what our brains do (which may indeed be that uniqueness of the human condition) is to propose to us that we are actually aware of ourselves being aware of things.

As Graziano says, "With the evolution of this attention schema, brains have an ability to attribute to themselves not only "this object is green" or "I am a living being", but also, "I have a subjective experience of those items". (Speculations on the evolution of awareness. Journal of Cognitive Neuroscience, 2014. http://www.princeton.edu/~graziano/Graziano_JCN_2014.pdf ).

If he is right, his theory explains what it is to have a human conscious experience that seems so fundamentally subjective and why that happens. However, I think what others here have said is true - this theory wouldn't explain the hard problem. Because it doesn't tell us how it can be, for example, that I actually see the colour red in my mind or I see a representation of the external world in my mind. It tells us how it is we are aware of the awareness, but not how the awareness arises in the first place.

I do think though that I share the experiences, the awareness itself, with other mammals. I guess from what you've said you disagree with me on that. Could you explain why?
 
  • #78
Graeme M said:
I do think though that I share the experiences, the awareness itself, with other mammals. I guess from what you've said you disagree with me on that. Could you explain why?

I think I explained why pretty clearly in my previous post #74 -- the reason is, fundamentally, that non-human animals do not give an "introspective report" of such internal mental experiences, so how can we be sure that they are, indeed, having such experiences in the same way that we humans do? And don't get caught in the trap of thinking that they don't give such introspective reports because they lack a voicebox or an opposable thumb. That has nothing to do with it. We would be able to detect such reports if they were there. Devising sophisticated techniques to look for such reports is what primatologists and "animal consciousness" researchers do for a living. Also, look at Stephen Hawking: all he can do these days is twitch a cheek muscle, and he can carry on a black hole war with Lenny Susskind. So I think it's clear that the inability of non-human animals to give an introspective report of their internal experiences is not due to any body-structure limitations; it's due to the fact that they are not attempting to communicate such reports.

Graeme M said:
To be aware that you are aware of red is really just a wrinkle on an established function, isn't it?

I think it's a bit more than that.

In any case, I'll take a look at the Graziano reference you posted later on today and perhaps give a separate response to that in relation to the other comments in your post.
 
  • #79
The 'definition' of consciousness is generally broken into two parts, as per Chalmers. The first is "psychological consciousness" and the second is "phenomenal consciousness". In simple terms, psychological consciousness is the easy problem, because phenomena such as how things (e.g., neurons) function or interact are objectively observable. Phenomenal consciousness is the hard problem, because phenomena such as our subjective experience of red or pain or any other feeling are not objectively observable. We can have an animal that can distinguish certain wavelengths of light such as red, and therefore use that ability to perform a function, and that's easy because we can, in principle, observe how neurons interact to create that function. But why an animal or human should have some subjective experience of red at all is the hard part. Why red should have the subjective quality it does, as opposed to another color or to some other subjective experience altogether, is what needs to be explained if one claims to be explaining the hard problem.
 
  • #80
madness said:
Thinking again, it's a little more complicated. If Graziano subscribed to functionalism, then it should be any system which implements his proposed function. Presumably he would take a functionalist approach, but that leads to problems like the China brain.

Is functionalism incompatible with Tononi's theory? I don't see why one couldn't have a China brain configured to have a certain amount of phi.

madness said:
In addition to phi, there is the "qualia space (Q)" http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1000462. This is intended to address the kind of experiences they will have.

Here he's really talking about the report of a subjective experience. He can't say whether the subjective experiences corresponding to the same report are really the same subjective experience.
 
  • #81
atyy said:
Is functionalism incompatible with Tononi's theory? I don't see why one couldn't have a China brain configured to have a certain amount of phi.

It is incompatible:

http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1003588
"there can be true “zombies” – unconscious feed-forward systems that are functionally equivalent to conscious complexes"

Which is the basis of a major criticism in this paper:

http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1004286
"Since IIT is not a form of computational functionalism, it is vulnerable to fading/dancing qualia arguments"

I'm not sure if the China brain specifically could have high phi or not.

atyy said:
Here he's really talking about the report of a subjective experience. He can't say whether the subjective experiences corresponding to the same report are really the same subjective experience.

Why do you think that he is talking about reports? It seems clear to me that he is talking about experiences rather than reports. For example, why would a behavioural report correspond to a geometrical shape in an information space? That appears unreasonable to me.
 
  • #82
madness said:
It is incompatible:

http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1003588
"there can be true “zombies” – unconscious feed-forward systems that are functionally equivalent to conscious complexes"

Which is the basis of a major criticism in this paper:

http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1004286
"Since IIT is not a form of computational functionalism, it is vulnerable to fading/dancing qualia arguments"

I'm not sure if the China brain specifically could have high phi or not.

I agree with your statements as Tononi's reading of his own theory. However, I am unsure if Tononi has interpreted his theory correctly. On the other hand, since I'm skeptical that his theory solves the hard problem, or even the pretty hard problem, it would be like being skeptical of the wrong interpretation of the wrong theory.

madness said:
Why do you think that he is talking about reports? It seems clear to me that he is talking about experiences rather than reports. For example, why would a behavioural report correspond to a geometrical shape in an information space? That appears unreasonable to me.

Can the theory be falsified? My thinking was that if it can be falsified, then it must be falsified by reports, and if it is about experiences then it cannot be falsified. For example, he begins by saying "By contrast, the cerebellum - a part of our brain as complicated and even richer in neurons than the cortex – does not seem to generate much experience at all: if the cerebellum has to be removed surgically, consciousness is hardly affected. What is special about the corticothalamic system, then, that is not shared by the cerebellum?". How can we know that the quality of consciousness is hardly affected when the cerebellum is removed? Is that a testable statement? Or does he only mean that reports of the quality of consciousness are hardly affected? If the two cannot be distinguished, then it seems that all that has really been addressed is the easy problem.
 
  • #83
atyy said:
"By contrast, the cerebellum - a part of our brain as complicated and even richer in neurons than the cortex – does not seem to generate much experience at all: if the cerebellum has to be removed surgically, consciousness is hardly affected. What is special about the corticothalamic system, then, that is not shared by the cerebellum?". How can we know that the quality of consciousness is hardly affected when the cerebellum is removed? Is that a testable statement? Or does he only mean that reports of the quality of consciousness are hardly affected? If the two cannot be distinguished, then it seems that all that is really been addressed is the easy problem.

There's no guarantee that the cerebellum doesn't possess its own kind of rudimentary consciousness, or that the conscious experience we have as individuals is a conglomerate of many such systems.

But... I still think we're touching on the hard problem if we find consistencies in reporting - we're not guaranteed of the result, but neither are we for theories of gravity or electrodynamics. In the end, they're all just models abstracted into the terms of human thinking. And to that end, I don't think the hard problem is much different from any other problem in physics - we simply can't know whether our models are correct in the way we conceptually view them; we can only observe when our models "work" (successfully make valid, consistent predictions).
 
  • #84
atyy said:
I agree with your statements as Tononi's reading of his own theory. However, I am unsure if Tononi has interpreted his theory correctly. On the other hand, since I'm skeptical that his theory solves the hard problem, or even the pretty hard problem, it would be like being skeptical of the wrong interpretation of the wrong theory.

I think it's more a prediction than an interpretation. Basically, for a system with a given input-output relation, the value of Phi can be very different depending on what goes on in the middle. If Phi is the conscious level, and a "function" is an input-output relationship, then IIT is not a functionalist theory.
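A toy version of that claim, using the same crude "whole minus parts" score as the sketch back in #66 (again, not IIT's real Phi): two networks with equally simple, equally deterministic dynamics score completely differently depending on whether the nodes' updates depend on each other. The two maps below are admittedly different functions, so this is not literally the fixed input-output scenario, but it shows how the score separates integration from mere information throughput:

```python
# Whole-minus-parts score for two 2-node networks. Both maps are bijections
# on the state space (the same "amount" of lawful processing), but only one
# is integrated. A crude stand-in for Phi, for illustration only.
import itertools
import math
from collections import Counter

STATES = list(itertools.product([0, 1], repeat=2))

def mi(pairs):
    """Mutual information between coordinates of equally weighted (x, y) samples."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in joint.items())

def toy_phi(step):
    whole = mi([(s, step(s)) for s in STATES])
    parts = sum(mi([(s[i], step(s)[i]) for s in STATES]) for i in range(2))
    return whole - parts

print(toy_phi(lambda s: (s[1], s[0])))  # cross-coupled: each copies the other -> 2.0
print(toy_phi(lambda s: (s[0], s[1])))  # decomposable: each copies itself    -> 0.0
```

The decomposable system scores zero even though it is just as predictable state to state, which is the sense in which the score depends on what goes on in the middle rather than on the input-output function alone.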

atyy said:
Can the theory be falsified? My thinking was that if it can be falsified, then it must be falsified by reports, and if it is about experiences then it cannot be falsified. For example, he begins by saying "By contrast, the cerebellum - a part of our brain as complicated and even richer in neurons than the cortex – does not seem to generate much experience at all: if the cerebellum has to be removed surgically, consciousness is hardly affected. What is special about the corticothalamic system, then, that is not shared by the cerebellum?". How can we know that the quality of consciousness is hardly affected when the cerebellum is removed? Is that a testable statement? Or does he only mean that reports of the quality of consciousness are hardly affected? If the two cannot be distinguished, then it seems that all that has really been addressed is the easy problem.

The theory is testable only insofar as we can rely on behavioural reports. I'm not sure that this is equivalent to saying that it is a theory of behavioural reports, however.
 
  • #85
madness said:
I think it's more a prediction than an interpretation. Basically, for a system with a given input-output relation, the value of Phi can be very different depending on what goes on in the middle. If Phi is the conscious level, and a "function" is an input-output relationship, then IIT is not a functionalist theory.

Yes, if by functionalism one means "input-output relation", but I wasn't sure exactly how strictly the term was being used, and whether it included the China brain.

Anyway, now that I think I understand the terminology a bit better, I do agree that phi characterizes an equivalence class of dynamical systems in a way that the "internal structure" matters. I believe that some China brains can be configured to have high phi. I also do not believe Graziano's theory is pure functionalism, since it is a theory of internal structure.

madness said:
The theory is testable only insofar as we can rely on behavioural reports. I'm not sure that this is equivalent to saying that it is a theory of behavioural reports, however.

Yes, it may not be a theory of behavioral reports. But to solve the hard problem or the harder aspects of the pretty hard problem, that uncertainty should be removed.
 
  • #86
atyy said:
Yes, if by functionalism one means "input-output relation", but I wasn't sure exactly how strictly the term was being used, and whether it included the China brain.

Anyway, now that I think I understand the terminology a bit better, I do agree that phi characterizes an equivalence class of dynamical systems in a way that the "internal structure" matters. I believe that some China brains can be configured to have high phi. I also do not believe Graziano's theory is pure functionalism, since it is a theory of internal structure.

My point was not that functionalism is a problem for Graziano, but that he does not solve the hard problem unless you add some extra assumptions, such as "functionalism is true". For example, in Graziano's theory, is a China brain version of his system conscious? What about a feedforward equivalent system which implements the same function? I don't think Graziano's theory, on its own, can answer these questions, meaning that it does not solve the hard problem (or even the pretty hard problem).

At best, Graziano gives an explanation of why humans are conscious. But even that I disagree with, because it is really a theory of why humans would report conscious experience. It might even be a form of eliminative materialism (or to put it another way, it is consistent with eliminative materialism, but also consistent with almost any other philosophy of mind).

In my opinion, Graziano's theory says nothing interesting about consciousness. An explanation of behavioural reports has never been a deep and interesting question. The only way to get anything more out of his theory is to add something like "functionalism", "eliminative materialism", or some other well-established philosophy.

There is only one case in which I can see Graziano's theory as providing an attempt at the hard problem. If we take Graziano's theory, take functionalism, and take the view that a system is conscious if and only if it implements Graziano's proposed function, then we can determine whether an arbitrary physical system is conscious. For any other interpretation I think his theory would fall short of the mark.
 
  • #87
madness said:
My point was not that functionalism is a problem for Graziano, but that he does not solve the hard problem unless you add some extra assumptions, such as "functionalism is true". For example, in Graziano's theory, is a China brain version of his system conscious? What about a feedforward equivalent system which implements the same function? I don't think Graziano's theory, on its own, can answer these questions, meaning that it does not solve the hard problem (or even the pretty hard problem).

At best, Graziano gives an explanation of why humans are conscious. But even that I disagree with, because it is really a theory of why humans would report conscious experience. It might even be a form of eliminative materialism (or to put it another way, it is consistent with eliminative materialism, but also consistent with almost any other philosophy of mind).

In my opinion, Graziano's theory says nothing interesting about consciousness. An explanation of behavioural reports has never been a deep and interesting question. The only way to get anything more out of his theory is to add something like "functionalism", "eliminative materialism", or some other well-established philosophy.

There is only one case in which I can see Graziano's theory as providing an attempt at the hard problem. If we take Graziano's theory, take functionalism, and take the view that a system is conscious if and only if it implements Graziano's proposed function, then we can determine whether an arbitrary physical system is conscious. For any other interpretation I think his theory would fall short of the mark.

Yes, I more or less (maybe less, but that's not the point) agree with all that. My main puzzlement is why you and Pythagorean think Tononi comes any closer to overcoming these problems. But it seems we have at least some agreement that Tononi does not address the hard problem, only the pretty hard problem at best. Also, I think we agree that while Tononi's qualia space may be more than a theory of reports, it is not clear that it is not just a theory of reports.
 
  • #88
atyy said:
My main puzzlement is why you and Pythagorean think Tononi comes any closer to overcoming these problems.

In my case, it's the way Tononi frames the question, not so much the way he tries to answer it. He's framing it in terms of quantifiable physical events, whereas Graziano's explanation is (or appears to be) more conceptual and qualitative.
 
  • #89
atyy said:
Yes, I more or less (maybe less, but that's not the point) agree with all that. My main puzzlement is why you and Pythagorean think Tononi comes any closer to overcoming these problems.

My reasons are similar to Pythagorean's. If Tononi's theory were correct, it would solve the (pretty) hard problem. Whether or not Graziano's theory is correct has no bearing on the hard problem.

atyy said:
But it seems we have at least some agreement that Tononi does not address the hard problem, only the pretty hard problem at best.

I could make a similar claim about Newton's theory of gravity, or Einstein's general relativity. Solving the "pretty hard problem" is in my opinion the main goal of a scientific theory of consciousness.

atyy said:
Also, I think we agree that while Tononi's qualia space may be more than a theory of reports, it is not clear that it is not just a theory of reports.

To me it is clear that it's not a theory of reports at all. It's a bit like saying quantum mechanics is a theory of measurements and has nothing to do with subatomic particles.
 
  • #90
madness said:
To me it is clear that it's not a theory of reports at all. It's a bit like saying quantum mechanics is a theory of measurements and has nothing to do with subatomic particles.

Yes, quantum mechanics (in the orthodox interpretation) is a theory of measurements and has nothing to do with subatomic particles :oldwink:

To me, the most interesting bits of physics are questions of interpretation, e.g. how can we make quantum mechanics into a theory of reality? How can we make sense of renormalization in quantum field theory? How can we understand why some physical systems are conscious? Outlining possible answers to the first two questions took conceptual breakthroughs (by Bohm and Wilson respectively), and I expect the last also needs one.
 
