Graeme M
I've been busy these past few days and haven't had a chance to properly follow the discussion. On reading it through, I still feel I don't understand some of the basic foundations of the idea of a 'hard' problem of consciousness.
Much commentary and discussion here and elsewhere seems to me to address things such as self-awareness or language, or perhaps more exactly cognitive function, rather than consciousness. My take on the hard problem is best summarised by Pythagorean's statement:
"Really, the only aspect of consciousness that presents an epistemological challenge is the subjective experience: that matter can have feelings when it's arranged in the right way."
My idea of the hard problem is simply that there is an experience of awareness. Why should a brain that is just doing physical stuff have an experience of, for example, an external world? The external world, whatever that is, appears in my mind as "out there". My brain has an inner representation of the external world but the really curious thing is that it feels like it is out there and it is the world I am part of. That is, my interactions with this mental representation fairly accurately resemble my interactions with external objects.
Consciousness itself seems pretty straightforward, relatively speaking. That is, it seems to me to be the facility to represent the external world within a system such that the system can interact with the external world via that representation. That explains some of my earlier comments - it seems to me that any organism which can have some kind of representation of the external world and respond behaviourally to that is therefore conscious.
So, if I sense the world and react to it, I am conscious. That would be my starting point. All the extra bits that DiracPool describes are remarkable features and represent an evolving complexity to biological consciousness, but as I suggested earlier, why does that make for something extraordinary? In a biological sense, doesn't it just boil down to responding behaviourally to a representation of the world?
To me, then, an 'easy' problem is explaining how this representation arises mechanically; a 'hard' problem is explaining why it feels to me that I am experiencing the world.
Tononi's idea, as much as I can understand it, sounds good. But it's just a quantification model. That is, a system is conscious if it has a high enough phi, and it has experience if the shape in Q space is significant enough. But that doesn't address the hard problem, if the hard problem is as defined by Pythagorean. It would be useful, if it worked, to be able to predict a conscious experience within another organism. But the fact that the Q space shape of one of my experiences is equivalent to that of an experience of a blind burrowing mole only tells me that the mole is functionally conscious. It still offers no explanatory value for how I and the mole can actually come to have some feelings about the world.
I think a similar problem besets Graziano's theory, though I admit I'm not quite sure what he means. If there is an Attention Schema model that informs us of an abstracted model of attention, and this is what gives rise to our qualia of experience, there is still the problem of why we have that experience at all.
Or so it seems to me. What am I missing here?