What or where is our real sense of self?

AI Thread Summary
The discussion centers on the complex and unresolved nature of the "self" and "consciousness" within philosophy and cognitive neuroscience. Key questions include whether consciousness arises solely from neural activity, the subjective nature of consciousness, and how meaning is encoded by neurons. Participants explore various theories, such as self-representational theories, which suggest that higher-order representations play a crucial role in the emergence of subjectivity. The conversation also distinguishes between phenomenal consciousness (subjective experience) and psychological consciousness (objective behavior), emphasizing the need for a clear understanding of these concepts to advance the discourse on consciousness. The role of language and social evolution in shaping self-awareness is highlighted, suggesting that the sense of self is deeply tied to communication and cultural context. Overall, the dialogue reflects a blend of scientific inquiry and philosophical exploration, acknowledging the challenges in reconciling subjective experiences with objective measurements of consciousness.
Metarepresent
No one in philosophy or cognitive neuroscience has come to a general consensus about the “self”. Questions about the “self” and “consciousness” have been pestering me for a while. I believe I need more knowledge before I can truly adopt a position; hence the purpose of this message is the acquisition of knowledge. I am currently studying a cognitive neuroscience textbook with a friend (he and I are both undergrad biology majors), and we have started a blog documenting our progress. Granted, I realize this is a question that needs aid from all fields, especially philosophy.

First, let’s discuss the essential questions that must be answered in order to formulate a cohesive theory of self:

Does consciousness emerge from neural activity alone? Why is there always someone having the experience? Who is the feeler of your feelings and the dreamer of your dreams? Who is the agent doing the doing, and what is the entity thinking your thoughts? Why is your conscious reality your conscious reality? Why is consciousness subjective? Why does our perceived reality almost invariably have a center: an experiencing self? How exactly, then, does subjectivity, this “I”, emerge? Is the self an operation rather than a thing or repository? How to comprehend subjectivity is the deepest puzzle in consciousness research. The most important question of all is: how do neurons encode meaning and evoke all the semantic associations of an object?

Also, before continuing this long diatribe: what are the best phenomenal characteristics we may attribute to consciousness? I believe unity, a recursive processing style, and an egocentric perspective are the best phenomenal target properties attributed to the ‘self’. Furthermore, there are still issues all theories of consciousness must address. These include, but are not limited to, binding (e.g., how the property of coherence arises in consciousness - how are the processing domains in the brain distributed to allow this? Wolf Singer claims that the synchronization of oscillatory activity may be the mechanism for the binding of distributed brain processes), qualia, and the Cartesian theater (e.g., how can we create a theory of consciousness without falling into the covert dualism Dennett warns against?).
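
To make the binding-by-synchrony idea a bit more concrete, here is a toy Kuramoto-style simulation in Python (my own illustrative sketch with made-up parameter values, not Singer's actual model): a population of oscillators with scattered natural frequencies phase-locks once the coupling is strong enough, which is the kind of dynamic appealed to for binding distributed brain processes.

```python
import numpy as np

# Toy Kuramoto model: N oscillators with random natural frequencies.
# With strong enough coupling K they phase-lock -- a cartoon of the
# oscillatory synchronization Singer proposes as a binding mechanism.
rng = np.random.default_rng(0)
N, K, dt, steps = 50, 4.0, 0.01, 5000
omega = rng.normal(0.0, 1.0, N)         # natural frequencies
theta = rng.uniform(0.0, 2 * np.pi, N)  # initial phases

for _ in range(steps):
    # each oscillator is pulled toward the phases of all the others
    coupling = (K / N) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    theta += (omega + coupling) * dt

# order parameter r: near 0 = incoherent, near 1 = synchronized
r = abs(np.exp(1j * theta).mean())
print(f"order parameter r = {r:.2f}")
```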

For some reason, the self-representational theories of subjectivity appeal to me. I believe higher-order representations have a huge role in the origin of subjectivity (as proposed by Ramachandran and others). I am VERY interested in what you have to say regarding this:

"Very early in evolution the brain developed the ability to create first-order sensory representation of external objects that could elicit only a very limited number of reactions. For example a rat's brain has only a first-order representation of a cat - specifically, as a furry, moving thing to avoid reflexively. But as the human brain evolved further, there emerged a second brain - a set of nerve connections, to be exact - that was in a sense parasitic on the old one. This second brain creates metarepresentations (representations of representations – a higher order of abstraction) by processing the information from the first brain into manageable chunks that can be used for a wider repertoire of more sophisticated responses, including language and symbolic thought. This is why, instead of just “the furry enemy” that it for the rat, the cat appears to you as a mammal, a predator, a pet, an enemy of dogs and rats, a thing that has ears, whiskers, a long tail, and a meow; it even reminds you of Halle Berry in a latex suit. It also has a name, “cat,” symbolizing the whole cloud of associations. In sort, the second brain imbues an object with meaning, creating a metarepresentation that allows you to be consciously aware of a cat in a way that the rat isn’t.

Metarepresentations are also a prerequisite for our values, beliefs, and priorities. For example, a first-order representation of disgust is a visceral “avoid it” reaction, while a metarepresentation would include, among other things, the social disgust you feel toward something you consider morally wrong or ethically inappropriate. Such higher-order representations can be juggled around in your mind in a manner that is unique to humans. They are linked to our sense of self and enable us to find meaning in the outside world – both material and social – and allow us to define ourselves in relation to it. For example, I can say, “I find her attitude toward emptying the cat litter box disgusting.”
- The Tell-Tale Brain by V.S. Ramachandran, page 247
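
To make the first-order/metarepresentation contrast concrete, here is a toy Python sketch (my own illustration, not anything from the book): the rat's representation affords a single reflexive reaction, while the metarepresentation wraps the same percept in a cloud of associations that can be queried and recombined.

```python
from dataclasses import dataclass, field

@dataclass
class FirstOrderRep:
    percept: str   # raw sensory category
    reaction: str  # the single hard-wired response it affords

@dataclass
class MetaRep:
    base: FirstOrderRep                # a representation of a representation
    label: str                         # the word symbolizing the whole cloud
    associations: list = field(default_factory=list)

rat_cat = FirstOrderRep(percept="furry moving thing", reaction="avoid reflexively")
human_cat = MetaRep(
    base=rat_cat,
    label="cat",
    associations=["mammal", "predator", "pet", "enemy of dogs and rats",
                  "has whiskers", "meows"],
)

# Unlike the first-order representation, the metarepresentation can be
# inspected and filtered against new goals:
print([a for a in human_cat.associations if "dog" in a])
```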

It is with the manipulation of metarepresentations, according to Ramachandran, that we engage in human consciousness as we know it. Thomas Metzinger made a related claim: “Our evolved type of conscious self-model is unique to the human brain, in that by representing the process of representation itself, we can catch ourselves – as Antonio Damasio would call it – in the act of knowing”. I have also read many other plausible self-representational theories of consciousness. For example, I find this one very convincing, due to how it places the notion of a recursive consciousness into an evolutionary paradigm:

http://www.wwwconsciousness.com/Consciousness_PDF.pdf
 
An entirely different school of thought (and the correct one :smile:) is that self-awareness and all the other human "higher mental abilities" are the result of language and cultural evolution. Words, as a new level of symbolic order and constraint, scaffold more elaborate forms of thinking.

Lev Vygotsky (the sociocultural school) and to a lesser extent GH Mead (symbolic interactionism) are the best sources here. The idea gets rediscovered regularly, for example by the philosopher Andy Clark.

So the meta-thinking is due to social evolution (call it memetic if you like) rather than biological (genetic) evolution. This is why neuroscience does not find "higher faculties" in the brain's architecture, even though it has really, really tried to.
 
If a single human somehow developed without any stimulus from the environment, it would seem common sense that it would have no thought processes, no self.

What neuroscience can do is try to find the means by which the brain is able to engage in social activity and take advantage of it; how it stores the information, changes it, and contributes to the information ensemble that is society.

For instance, spatial metaphors are associated with spatial processing in the parietal lobes, as ways of "perceiving" things that aren't directly detectable by our senses. In the most abstract sense, we use visualization in science to transform measurable variables into perceivable spatial objects (plots or graphs). But we also use a lot of spatial metaphors to describe emotional situations and even the self ("within", "inside") as compared to the "outside" world.

Linguistic relativity definitely plays a role in this; societies (even nonhuman societies) have physically evolved to where they can organize a body of knowledge (stored in language) to augment the simpler signaling-protein language of cells. I speculate that human language is possibly the most sophisticated in the animal kingdom.
 
Hi Metarepresent. Welcome to the board.
Metarepresent said:
No one in philosophy or cognitive neuroscience has come to a general consensus about the “self”.
You said a mouthful right there. You’ll get different opinions from everyone, and more often than not, people will talk right past each other, not knowing or not understanding what the other person is talking about, so getting a solid grounding in some of the concepts before we refer to them is important.

When we talk about consciousness, there are various phenomena that we might be referring to. One person might be talking about the objective aspects of consciousness (psychological consciousness) but others might be thinking about the subjective aspects of consciousness, also known as phenomenal consciousness. The two are very different, though some folks can’t or won’t differentiate between the two.

In his book "The Conscious Mind", Chalmers talks about these two different concepts of mind, the phenomenal and the psychological. The phenomenal aspect of mind "is characterized by the way it feels ... ". In other words, the phenomenal qualities of mind include those phenomena that form our experiences or how experience feels. The psychological aspect of mind in comparison "is characterized by what it does." The psychological aspects of mind are those things that are objectively measurable such as behavior or the interactions of neurons.

Consider phenomenal consciousness to be a set of phenomena: the experience of the color red, how sugar tastes, what pain feels like, what hot or cold feel like, what love and anger feel like, and so on are all phenomenal qualities of mind. So when we refer to phenomenal consciousness, we're referring to that aspect of mind that is characterized by our subjective experiences.

In comparison, we can talk about psychological consciousness which again is a set of phenomena. Psychological consciousness includes that which is objectively measurable such as behavior. We might think phenomenal consciousness and psychological consciousness are one and the same, but they pick out separate and distinct phenomena. One is subjective, the other is objective.

Chalmers states, "It seems reasonable to say that together, the psychological and the phenomenal exhaust the mental. That is, every mental property is either a phenomenal property, a psychological property, or some combination of the two. Certainly, if we are concerned with those manifest properties of the mind that cry out for explanation, we find first, the varieties of conscious experience and second, the causation of behavior. There is no third kind of manifest explanandum, and the first two sources of evidence - experience and behavior - provide no reason to believe in any third kind of nonphenomenal, nonfunctional properties ..."

So if you’re referring to how it is we experience anything, you are referring to phenomenal consciousness. If you refer to how neurons interact or how our behavior is affected by something, you are referring to psychological consciousness. The difference between the two, and why phenomenal consciousness should exist at all, is often what is referred to as the explanatory gap. We can explain why neurons interact in a given way, but why they should give rise to phenomenal consciousness isn’t explained by explaining how neurons interact.

Metarepresent said:
Does consciousness emerge from neural activity alone?
The standard paradigm of mind is that consciousness is emergent from the interaction of neurons. That isn’t to say that everyone agrees this is true, but that is what is generally taken as true. It gives rise to a few very serious problems and paradoxes though, so some folks have looked to see where those logical inconsistencies have arisen and how one might reconcile them. So far, I don’t see any consensus on how to reconcile these very serious issues, so logically it would seem to me that the standard theory of consciousness doesn’t provide an adequate explanation.

Metarepresent said:
Why is there always someone having the experience? Who is the feeler of your feelings and the dreamer of your dreams? Who is the agent doing the doing, and what is the entity thinking your thoughts?
This line of reasoning requires a religious type of duality, as if there were some kind of “soul” that is separate and distinct from the brain, the neurons and the bits and pieces that make up a person. But there's no need to consider this soul. Consider only that there are various phenomena created by the brain, some of which are subjective and can't be objectively measured. That's enough to explain the sense of self. The sense of self is a phenomenon created by the brain, a feeling that associates the phenomenal experience with the body that is having the experience. Note that the term “duality” in philosophy can have another, very different meaning, however, which has nothing to do with religion. That’s a separate issue entirely.

Metarepresent said:
The most important of all questions is how do neurons encode meaning and evoke all the semantic associations of an object?
I’d mentioned earlier there are some very serious logical inconsistencies in the present theory of how phenomenal consciousness emerges from the interaction of neurons. This is one of them. Stevan Harnad calls this the “symbol grounding problem”. Basically, computational aspects of mind that require the emergence of consciousness from the interaction of neurons are considered to be “symbol manipulation systems” and as such, there is no grounding of the symbols to anything intrinsic to nature.

Regarding the reference to Ramachandran, one can ask whether his discussion of self-representation is a discussion about the psychological aspect of consciousness (i.e., the objectively measurable one) or the phenomenal aspect. We can talk about how neurons interact, how additional neurons such as mirror neurons interact, and the resulting behavior of an animal or person, but it gets us no closer to closing the explanatory gap. Why should there be phenomenal experiences associated with any of those neuron interactions? Without explaining why these additional phenomena should arise, we haven't explained phenomenal consciousness.
 
Consider the idea of the philosophical zombie, which you have probably heard of. It seems to me that any objective theory of consciousness cannot distinguish between such zombies and human beings with subjective experience. I agree with Q_Goest: there is an important distinction between the objective phenomenon of consciousness and its subjective quality, and it seems that the latter cannot be touched upon in any objective theory. One can imagine the existence of such zombies, and in my opinion you cannot infer that some particular individual is not such a being.
 
Q_Goest said:
In comparison, we can talk about psychological consciousness which again is a set of phenomena. Psychological consciousness includes that which is objectively measurable such as behavior. We might think phenomenal consciousness and psychological consciousness are one in the same, but they pick out separate and distinct phenomena. One is subjective, the other is objective.

Yes, and so one is also folklore, the other is science. I vote for the science.

And what science can tell us, for example, is that introspection is learned and socially constructed. There is no biological faculty of "introspective awareness". It is a skill you have to learn. And what most people end up "seeing" is what cultural expectations lead them to see.

A classic example is dreams. Very few people learn to accurately introspect on the phenomenal nature of dreams.

The hard problem of consciousness really boils down to the fact that people find it too hard to study all the neurology, psychology, anthropology and systems science involved in being up to date with what is known.

It is so much easier to be a Chalmers and say all that junk is irrelevant, he has no need to learn it, because there will always be a phenomenological mystery. Lots of people love that line because it justifies a belief in soulstuff or QM coherence magic. It is the easy cop-out.

There is also a philosophy of science issue being trampled over here.

The point of scientific models is to provide objective descriptions based on "fundamentals" or universals. If you are modelling atoms or life, you are not trying to provide a subjective impression (i.e., particular and local as opposed to general and global) of "what it is like" to be an atom, or a living creature.

So a clear divide between objective description and subjective experience is what we are aiming for rather than "a hard problem". We are trying to get as far away from a subjective stance as possible, so as to be maximally objective (see Nozick's Invariances for example).

Having established an "objective theory of mind" we should then of course be able to return to the subjective in some fashion.

It would be nice to be able to build "conscious" machines based on the principles discovered (for instance, Grossberg's ART neural nets or Friston's Bayesian systems). It would be nice to have a feeling of being able to understand why the many aspects of consciousness are what they are, and not otherwise (for example, why you can have blackish green, red, and blue - forest green, scarlet, navy - but not blackish yellow).
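
To give a concrete taste of the Bayesian side of this, here is a minimal sketch (my own toy example, not Friston's actual free-energy formulation) of the precision-weighted belief update such models are built on: a Gaussian prior about a hidden cause is combined with noisy observations, and the belief sharpens as evidence accumulates.

```python
def bayes_update(prior_mean, prior_var, obs, obs_var):
    """Posterior for a Gaussian prior combined with a Gaussian observation."""
    k = prior_var / (prior_var + obs_var)            # precision-weighted gain
    post_mean = prior_mean + k * (obs - prior_mean)  # shift toward the data
    post_var = (1 - k) * prior_var                   # uncertainty shrinks
    return post_mean, post_var

belief = (0.0, 1.0)              # vague prior about some hidden cause
for obs in [0.9, 1.1, 1.0]:      # repeated noisy observations near 1.0
    belief = bayes_update(*belief, obs, obs_var=0.5)
    print(f"mean={belief[0]:.2f}, var={belief[1]:.3f}")
```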

But it takes a level of study that most people are not willing to undertake to appreciate exactly how much we do already know, and how far we might still have to go.
 
apeiron said:
Yes, and so one is also folklore, the other is science. I vote for the science.

...

The hard problem of consciousness really boils down to the fact that people find it too hard to study all the neurology, psychology, anthropology and systems science involved in being up to date with what is known.

It is so much easier to be a Chalmers and say all that junk is irrelevant, he has no need to learn it, because there will always be a phenomenological mystery. Lots of people love that line because it justifies a belief in soulstuff or QM coherence magic. It is the easy cop-out.

There is also a philosophy of science issue being trampled over here.

...

But it takes a level of study that most people are not willing to undertake to appreciate exactly how much we do already know, and how far we might still have to go.
Ok, so… psychological consciousness is science and phenomenal consciousness is folklore? And you vote for science. And people like Chalmers say a lot of junk because he’s copping out? And the science issue is being trampled and the level of study that you have is magnificent and other folks just aren’t up to your standards. Is that about right? That’s quite the attitude you have.

Do you know what browbeat means?
brow•beat –verb (used with object), -beat, -beat•en, -beat•ing.
to intimidate by overbearing looks or words; bully: They browbeat him into agreeing.
I’m sure you’re much smarter than me or anyone else here, but how about you cut the sh*t and stop acting like a bully on the school playground.

apeiron said:
But there seems less point jumping up and down in a philosophy sub-forum about people wanting to do too much philosophy for your tastes. We get that you can get by in your daily work without raising wider questions.
https://www.physicsforums.com/showthread.php?t=451342&+forum&page=4

apeiron said:
To me this is demonstrating the inflexibility of thought I am talking about. Me clever scientist, you dopey philosopher. Ugh. End of story.
https://www.physicsforums.com/showthread.php?t=302413&+forum&page=2

Ok, so wait… if you are the clever scientist and I’m the dopey philosopher, then why did you say that ZapperZ was the clever scientist and you were the dopey philosopher. And why is this philosophy forum too much philosophy for you when you were arguing that you wanted to do philosophy in the philosophy forum? I’m SOOOOOOO confused.

I’m sure you have a rational explanation for all this and you won’t browbeat people like you always do. So, why haven’t you joined in the discussion about improving the philosophy forum in the science advisors forum? I haven’t seen anything from you there. It’s like you don’t seem to care about the philosophy forum or where it’s going or the new rules or anything. You really should discuss this with us. It’s up at the top of the main page under “science advisors forum”.
 
apeiron said:
It is the easy cop-out.

Cop out from what? The hard problem of consciousness doesn't boil down to a lack of knowledge of neurobiology. The opinions you are attacking aren't the same as the QM-mysticism, new-age soul-stuff out there. They are not objecting to, or "trampling on", the science behind neurology and how it relates to consciousness, and that science is not being degraded for what it is in its own domain. One is simply emphasizing the important distinction between this and what we call subjective experience, which again and again seems to be interpreted as a counter-argument against, or a cop-out from, scientific endeavor. Emphasizing that means drawing the line between the subjective nature of consciousness and what actually can be concluded from objective, measurable results.
 
Q_Goest said:
Ok, so wait… if you are the clever scientist and I’m the dopey philosopher, then why did you say that ZapperZ was the clever scientist and you were the dopey philosopher. And why is this philosophy forum too much philosophy for you when you were arguing that you wanted to do philosophy in the philosophy forum? I’m SOOOOOOO confused.

But my position was explained quite clearly in that thread. Did you not read it? Here it is again...

To me this is demonstrating the inflexibility of thought I am talking about. Me clever scientist, you dopey philosopher. Ugh. End of story.

You are basing your whole point of view on the assumption that scientists and philosophers have ways of modelling that are fundamentally different. Yet I'm not hearing any interesting evidence as to the claimed nature of the difference.

I am putting forward the alternative view that modelling is modelling, and there are different levels of abstraction - it is a hierarchy of modelling. Down the bottom, you have science as technology, up the top, you have science as philosophy. And neither is better. Both have their uses.

And I've frequently made these points to explain my general position on "philosophy"...

1) Classical philosophy is instructive: it shows the origin of our metaphysical prejudices - why we believe what we believe. And those guys did a really good job first time round.

2) Modern academic philosophy is mostly shallow and a waste of time. If you want the best "philosophical" thinking, then I believe you find it within science - in particular, theoretical biology when it comes to questions of ethics, complexity, life, mind and meaning.

3) Unfortunately, the level of philosophical thought within mind science - neuro and psych - is not as sophisticated as in biology. Largely this is because neuro arises out of medical training and psych out of computational models. However, give the field another 20 years and who knows?

4) I have a personal interest in the history of systems approaches within philosophy, so that means Anaximander, Aristotle, Hegel, Peirce, etc.

5) I've had personal dealings with Chalmers and others, so that informs my opinions.

Q_Goest said:
I’m sure you have a rational explanation for all this and you won’t browbeat people like you always do. So, why haven’t you joined in the discussion about improving the philosophy forum in the science advisors forum? I haven’t seen anything from you there. It’s like you don’t seem to care about the philosophy forum or where it’s going or the new rules or anything. You really should discuss this with us. It’s up at the top of the main page under “science advisors forum”.

I really can't see any link to such a forum. And I certainly was not invited to take part :smile:.

As to where the philosophy forum should go, it might be nice if it discussed physics a bit more. I wouldn't mind if topics like freewill and consciousness were banned because there just isn't the depth of basic scientific knowledge available to constrain the conversations.
 
  • #10
The left thinks you're right, the right thinks you're left. Nobody likes a moderate.
 
  • #12
Apeiron, the tone of your last post was much improved. I’ve already made the point that some folks can’t or won’t differentiate between psychological consciousness and phenomenal consciousness. I won’t hijack this thread to push my views, and I’d ask that you attempt to do the same.
 
  • #13
apeiron said:
An entirely different school of thought (and the correct one :smile:) is that self-awareness and all the other human "higher mental abilities" are the result of language and cultural evolution.
One thing I'd love you to discuss is why you're taking for granted that self-awareness is a higher mental ability.

One needs to think that if one is to believe that your school of thought is the most likely to add interesting pieces of evidence regarding awareness. Maybe if no one has yet managed to develop a robotic awareness, that's because our present ideas are not enough to explain awareness. In particular, for your position the problem may well be that awareness is simply not explained by language. My two cents :wink:
 
  • #14
apeiron said:
An entirely different school of thought (and the correct one :smile:) is that self-awareness and all the other human "higher mental abilities" are the result of language and cultural evolution. Words, as a new level of symbolic order and constraint, scaffold more elaborate forms of thinking.


I would agree that it’s useless to try to understand “consciousness” outside the context of language and human communication. The “sense of self” humans have is something we develop when we’re very young, by talking to ourselves. I don’t see any reason to think that someone who grew up having no contact with other humans would develop anything like the “sense of self” that we all take for granted.

Of course we’ll never know what such a person’s internal experience is like, if they can’t tell us about it. And they’ll never know anything about it themselves, if they can’t ask themselves about it.

Likewise it seems obvious to me that the “meta-representation” discussed by Ramachandran is built on language... although he writes as though language were just one of the many “more sophisticated responses” that our “second brain” somehow supports.

No doubt everything we do and experience is supported by our neural architecture. Ramachandran says, “as the human brain evolved further...” – but the thing is, once interpersonal communication begins to evolve, there’s a completely new, non-genetic channel for passing things on from one generation to the next... and a corresponding selective “pressure” to improve the communications channel and make it more robust. So our brains must have evolved through biological evolution to support this far more rapid kind of evolution through personal connection.

But whatever “higher” capacities our human brain has evolved, they can only function to the extent each of us learns to talk. We have a “conscious self” to the extent we have a communicative relationship with ourselves, more or less like the relationships we have with others.
 
  • #15
First, to Lievo: if you wish to better understand why apeiron (or anybody, for that matter) "takes for granted" the idea that self-awareness is a higher mental function, you could start by reading this paper:

http://www.marxists.org/archive/vygotsky/works/1925/consciousness.htm

I'm sure you will find that it elucidates various methodological problems with different approaches to psychology/ the problem of awareness and the beginning of the socially mediated approach to language and self-awareness.

Also, to ConradDJ: just figured I would throw it out there, out of fairness to Ramachandran, that later on in the book he says something quite similar to what you said about "...evolution through personal connection". He speaks, speculatively, about portions of the brain responsible for tool use and social responses in apes undergoing a random mutation; what originally evolved for tool usage and primitive social response served as an exaptation that freed us from more biologically determined responses over long time scales and enabled us to transmit learned practices horizontally and vertically through generations, on much shorter time scales.
That seems to me to be close to what you said.
 
  • #16
ConradDJ said:
I don’t see any reason to think that someone who grew up having no contact with other humans would develop anything like the “sense of self” that we all take for granted.
Conrad, was that meant to answer my question? So you think that no animal has a sense of self, right?

JDStupi said:
I'm sure you will find that it elucidates various methodological problems with different approaches to psychology/ the problem of awareness and the beginning of the socially mediated approach to language and self-awareness.
JDStupi, again, this describes Vygotsky's position; it does not answer my question in any way. Specifically, in the link you pointed to, he took for granted that:

all animal behaviour consists of two groups of reactions: innate or unconditional reflexes and acquired or conditional reactions. (...) what is fundamentally new in human behaviour (...) Whereas animals passively adapt to the environment, man actively adapts the environment to himself.

I agree this was 'obvious' in his time - a time when surgeons thought that babies could not feel pain. I just don't see how one can still believe that - nor about babies, for that matter. Don't you now think it's quite obvious that many animals behave on their own, have intents and feelings, and thus must have a sense of self?
 
  • #17
Metarepresent said:
For some reason, the self-representational theories of subjectivity appeal to me. I believe higher-order representations have a huge role in the origin of subjectivity (as proposed by Ramachandran and others). I am VERY interested in what you have to say regarding this:

"Very early in evolution the brain developed the ability to create first-order sensory representation of external objects that could elicit only a very limited number of reactions. For example a rat's brain has only a first-order representation of a cat - specifically, as a furry, moving thing to avoid reflexively.[...]​
What is a representation? It is a presentation (image, sound, or otherwise) in the mind. So when there is a representation, there is necessarily a consciousness.

So while I will agree that "the self" may be a representation and that it has changed over time to become more elaborate and complex, I do not think putting a representation at the basis of consciousness will help explain how consciousness can come from nonconscious matter that doesn't have the ability to represent anything. The bold bit in Ramachandran's text is the part that needs explaining.
 
  • #18
Lievo said:
One thing I'd love you to discuss is why you're taking for granted that self-awareness is a higher mental ability.

As per my original post, the reasons for holding this position are based on well-established, if poorly publicised, research. Vygotsky is the single best source, IMHO. But for anyone interested, I can supply a bookshelf of references.

As to an animal's sense of self, there is of course also a different level of the sense of self: a basic sense of embodiment and awareness of body boundaries. Even an animal "knows" its tongue from its food, and so which bit to chew. The difference that language makes is the ability to think about such facts. An animal just "is" conscious. Humans can step back, scaffolded by words, to think about this otherwise surprising fact.
 
  • #19
Lievo said:
Don't you now think it's quite obvious that many animals behave on their own, have intents and feelings, and thus must have a sense of self?

This is the part of your post, I believe, that best represents the source of disagreement between you and others. The problem, in my mind, is that quite possibly an argument is developing over different topics. It seems as though some are speaking about a specifically human sense of self, the one we are acquainted with and experience, and about the necessary conditions for the existence of our sense of self, not necessarily any sense of self.
The problem is that we are prone to get into semantic disagreements over what constitutes a "self" at this level of knowledge, because we are arguing over the borders of the concept of a "self". On one hand, when we introspect on our sense of self, we find that our "I" is built upon a vast network of memories, associative connections, generalized concepts, a re-flective (to bend back upon) ability, and embedding within a specific socio-cultural milieu. On the other hand, we see that animals possess "subjective" states such as pain, pleasure, etc., which leads us to believe that animals too have an "I". The problem then is: in what sense are the animal "I" and the human "I" the same in meaning or reference?
Another question that arises, and that points out different conceptions of the "self", is "Is awareness a sufficient condition for a sense of self?" And if not, what is?

So while I will agree that "the self" may be a representation and that it has changed over time to become more elaborate and complex, I do not think putting a representation at the basis of consciousness will help explain how consciousness can come from nonconscious matter that doesn't have the ability to represent anything. The bold bit in Ramachandran's text is the part that needs explaining.

We must also be sure not to put words into Rama's mouth. I do not believe that Rama has any intention of saying that "all consciousness arises from representation", for this would simply be explaining opium in terms of its soporific properties. He, I believe, is specifically concerned with the existence of the "higher" forms of human consciousness. Of course, postulating that no animal has any sense of self whatsoever, and that humans then made a discrete jump to a fully developed sense of self, would render a "proper" explanation of awareness and consciousness impossible, because then it could only be explained ahistorically and not within a biological evolutionary context. That said, tracing the genealogical development of "awareness", "self", and their interrelation through biological history is not a question for human psychology, and in order to answer such a question we must seek the roots much further down. In fact, the very existence of cellular communication in some sense has a "representational" character, in that some specific process is taken to signal for some other regulation of the "behaviour" or homeostasis of the system. Anticipating something apeiron will most likely suggest, and he knows much more about this than I do, this line of thought is similar to that of "biosemiotics", which is essentially what these boundary-straddling discussions of "self" and "consciousness" need.
The unanswered question is "At what point do semantic operations arise out of the purely physical?" This question is closely related to the question of the origin of the encoding of information in the DNA molecule (or some closely related antecedent molecule). It is difficult to think about the origins of such processes because we are prone to ask "How did 'it' 'know' to encode information?", which is a form of homunculus fallacy, and circumventing this explanatory gap with a naturalistic explanation will be difficult.
 
  • #20
Jarle said:
Cop out from what? The hard problem of philosophy doesn't boil down to a lack of knowledge of neurobiology. The opinions you are attacking aren't the same as the QM-mysticism new age soul-stuff out there.

Well, in practice the hard problem is invoked in consciousness studies as justification for leaping to radical mechanisms because it is "obvious" that neurological detail cannot cut it. Chalmers, for example, floated the idea that awareness might be another fundamental property of matter like charge or mass. Panpsychism. And his arguments were used by QM mysterians like Hameroff to legitimate that whole school of crackpottery. So in the social history of this area, the hard problem argument has in fact been incredibly damaging in my view.

The hard problem exists because it is plain that simple reductionist modelling of the brain cannot give you a satisfactory theory of consciousness. There is indeed a fundamental problem because if you try to build from the neural circuits up, you end up having to say "and at the end of it all, consciousness emerges". There is no proper causal account. You just have this global property that is the epiphenomenal smoke above the factory.

So what's the correct next step? You shift from simple models of causality to complex ones. You learn about the systems view - the kind of thing biologists know all about because they have already been through the loop when it comes to "life" - that mysterious emergent property which once seemed a hard problem requiring some soulstuff, some vitalistic essence, to explain it fully.

Once you understand the nature of global constraints and downward causality, the hard problem evaporates IMHO. There is no such thing as an epiphenomenal global property in this view - a global state that can exist, yet serve no function. You logically now cannot have such a detached thing.

Of course there is still the objective description vs the subjective impression distinction. But that applies to all modelling because that is what modelling is about - objectifying our impressions.
 
  • #21
pftest said:
What is a representation? It is a presentation (image, sound, or otherwise) in the mind. So when there is a representation, there is necessarily a consciousness.

So while I will agree that "the self" may be a representation and that it has changed over time to become more elaborate and complex, I do not think putting a representation at the basis of consciousness will help explain how consciousness can come from nonconscious matter that doesn't have the ability to represent anything. The bold bit in Ramachandran's text is the part that needs explaining.

From my cognitive neuroscience textbook, my friend:

"Cognitive and neural systems are sometimes said to create representations of the world. Representations need not only concern physical properties of the world (e.g. sounds, colors) but may also relate to more abstract forms of knowledge (e.g. knowledge of the beliefs of other people, factual knowledge).

Cognitive psychologists may refer to a mental representation of, say, your grandmother, being accessed in an information-processing model of face processing. However, it is important to distinguish this from its neural representation. There is unlikely to be a one-to-one relationship between a hypothetical mental representation and the response properties of single neurons. The outside world is not copied inside the head, neither literally nor metaphorically; rather, the response properties of neurons (and brain regions) correlate with certain real-world features. As such, the relationship between a mental representation and a neural one is unlikely to be straightforward. The electrophysiological method of single-cell recordings ..." - page 33 of The Student's Guide to Cognitive Neuroscience, 2nd edition, by Jamie Ward

Lots of research has been conducted into the nature of representations. It is taken as axiomatic that representations exist. Proponents of embodied cognition (http://en.wikipedia.org/wiki/Embodied_cognition) would disagree, but I really, really dislike their views. Here is some more about the nature of representations from my textbook:

Rolls and Deco (2002) distinguish between three different types of representation that may be found at the neural level:

1. Local representation. All the information about a stimulus/event is carried in one of the neurons (as in a grandmother cell).
2. Fully distributed representation. All the information about a stimulus/event is carried in all neurons of a given population.
3. Sparse distributed representation. A distributed representation in which a small proportion of the neurons carry information about a stimulus/event. - page 35 of The Student's Guide to Cognitive Neuroscience, 2nd edition, by Jamie Ward
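
As a rough illustration of the difference between these three coding schemes (my own toy sketch, not Rolls and Deco's formalism), consider one stimulus encoded over a small population of model neurons:

```python
import numpy as np

n = 10  # population size
local = np.zeros(n); local[3] = 1.0                # one "grandmother cell" fires
distributed = np.random.default_rng(1).random(n)   # every neuron carries some information
sparse = np.zeros(n); sparse[[2, 7]] = [0.8, 0.6]  # a small subset carries it

for name, code in [("local", local), ("distributed", distributed), ("sparse", sparse)]:
    print(f"{name:12s} active neurons: {np.count_nonzero(code)}/{n}")
```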

What V.S. Ramachandran is claiming is that the manipulation of higher-order representations underlies our consciousness. He refers to this as metarepresentation. Here is some more information regarding his position:

V.S. Ramachandran hypothesizes that a new set of brain structures evolved in the course of hominization in order to transform the outputs of the primary sensory areas into what he calls “metarepresentations”. In other words, instead of producing simple sensory representations, the brain began to create “representations of representations” that ultimately made symbolic thought possible, and this enhanced form made sensory information easier to manipulate, in particular for purposes of language.

One of the brain structures involved in creating these metarepresentations would be the inferior parietal lobe, which is one of the youngest regions in the brain, in terms of evolution. In humans, this lobe is divided into the angular gyrus and the supramarginal gyrus, and both of these structures are fairly large. Just beside them is Wernicke’s area, which is unique to human beings and is associated with understanding of language.

According to Ramachandran, the interaction among Wernicke’s area, the inferior parietal lobe (especially in the right hemisphere), and the anterior cingulate cortex is fundamental for generating metarepresentations from sensory representations, thus giving rise to qualia and to the sense of a “self” that experiences these qualia.
http://thebrain.mcgill.ca/flash/a/a_12/a_12_cr/a_12_cr_con/a_12_cr_con.html
 
  • #22
Metarepresent said:
Lots of research has been conducted into the nature of representations. It is taken as axiomatic that representations exist. Proponents of embodied cognition (http://en.wikipedia.org/wiki/Embodied_cognition) would disagree, but I really, really dislike their views. Here is some more about the nature of representations from my textbook:
I don't deny that representations exist, but I do not think they exist outside of consciousness. Physically, neurons (or other physical objects) just consist of atoms (and the elementary particles that those consist of), and those in themselves do not represent anything, unless it is in the context of a mind utilising them, as with those three types of representations you quote. The neurons or single cells are located within the brain, and so they are spoken of as if they supply information to it and thus to the mind.

What V.S. Ramachandran is claiming is that the manipulation of higher-order representations underlies our consciousness. He refers to this as metarepresentation. Here is some more information regarding his position:

V.S. Ramachandran hypothesizes that a new set of brain structures evolved in the course of hominization in order to transform the outputs of the primary sensory areas into what he calls “metarepresentations”. In other words, instead of producing simple sensory representations, the brain began to create “representations of representations” that ultimately made symbolic thought possible, and this enhanced form made sensory information easier to manipulate, in particular for purposes of language.

One of the brain structures involved in creating these metarepresentations would be the inferior parietal lobe, which is one of the youngest regions in the brain, in terms of evolution. In humans, this lobe is divided into the angular gyrus and the supramarginal gyrus, and both of these structures are fairly large. Just beside them is Wernicke’s area, which is unique to human beings and is associated with understanding of language.

According to Ramachandran, the interaction among Wernicke’s area, the inferior parietal lobe (especially in the right hemisphere), and the anterior cingulate cortex is fundamental for generating metarepresentations from sensory representations, thus giving rise to qualia and to the sense of a “self” that experiences these qualia.
http://thebrain.mcgill.ca/flash/a/a_12/a_12_cr/a_12_cr_con/a_12_cr_con.html
It looks like he's talking about the evolution of representations (consciousness), how they get more complex, when there is already an initial representation to work with. I'd like to know where that initial representation came from.
 
  • #23
apeiron said:
I can supply a bookshelf of references.
Sure, but will it be relevant to this thread? I don't think so, but I will open a new thread. :wink:

JDStupi said:
It seems as though some are speaking about a specifically human sense of self, the one we are acquainted with and experience, and about the necessary conditions for the existence of our sense of self, not necessarily any sense of self.
Well said. So the question: is the hard problem hard because of what makes our sense of self specific to humans, or because of the part shared with many animals?

apeiron said:
An animal just "is" conscious.
This is directly and specifically relevant to the thread. The problem with any approach putting too much emphasis on language is that everything that makes consciousness a hard problem is already present when we're looking at this 'just' conscious phenomenon. That's why I don't see how Vygotsky can be relevant to this question. Not to be rude in any way, nor to say it's not interesting for some purposes. I'm just saying language is not the right place to start when one is interested in the hard question of subjectivity.

apeiron said:
the hard problem argument has in fact been incredibly damaging in my view. (...) Once you understand the nature of global constraints and downward causality, the hard problem evaporates IMHO.
Well, if you dismiss the hard problem, I can understand why you didn't get my point. I don't think you should dismiss the existence of the hard problem based on some misguided attempts to solve it, any more than Newton's attempts at alchemy should dismiss his ideas about gravitation.

apeiron said:
Of course there is still the objective description vs the subjective impression distinction. But that applies to all modelling because that is what modelling is about - objectifying our impressions.
Deeper than that, IMHO. For example, how can we know whether someone in a coma is or is not conscious? In retrospect, we know some have been misclassified, because we found a way to communicate with them. But will we one day be able to say that someone is NOT conscious 'just' by looking at his brain activity?
 
  • #24
pftest said:
I don't deny that representations exist, but I do not think they exist outside of consciousness. Physically, neurons (or other physical objects) just consist of atoms (and the elementary particles that those consist of), and those in themselves do not represent anything, unless it is in the context of a mind utilising them, as with those three types of representations you quote. The neurons or single cells are located within the brain, and so they are spoken of as if they supply information to it and thus to the mind.

Yeah, I adopt the stance of identity theory:

"The theory simply states that when we experience something - e.g. pain - this is exactly reflected by a corresponding neurological state in the brain (such as the interaction of certain neurons, axons, etc.). From this point of view, your mind is your brain - they are identical." http://www.philosophyonline.co.uk/pom/pom_identity_what.htm

A neural representation is a mental representation.

It looks like he's talking about the evolution of representations (consciousness), how they get more complex...

No, he is claiming that it is with the manipulation of metarepresentations ('representations of representations') that we engage in consciousness. Many other species have representational capacities, but not to the extent of humans.
 
  • #25
Lievo said:
Sure, but will it be relevant to this thread? I don't think so, but I will open a new thread. :wink:

Well said. So the question: is the hard problem hard because of what makes our sense of self specific to humans, or because of the part shared with many animals?

This is directly and specifically relevant to the thread. The problem with any approach putting too much emphasis on language is that everything that makes consciousness a hard problem is already present when we're looking at this 'just' conscious phenomenon. That's why I don't see how Vygotsky can be relevant to this question. Not to be rude in any way, nor to say it's not interesting for some purposes. I'm just saying language is not the right place to start when one is interested in the hard question of subjectivity.

Well, if you dismiss the hard problem, I can understand why you didn't get my point. I don't think you should dismiss the existence of the hard problem based on some misguided attempts to solve it, any more than Newton's attempts at alchemy should dismiss his ideas about gravitation.

Deeper than that, IMHO. For example, how can we know whether someone in a coma is or is not conscious? In retrospect, we know some have been misclassified, because we found a way to communicate with them. But will we one day be able to say that someone is NOT conscious 'just' by looking at his brain activity?

Generally, here you make a bunch of expressions of doubt, which require zero intellectual effort. I, on the other hand, have made positive and specific claims in regard to the OP (which may have been rambling, but is entitled "What or where is our real sense of self?").

So what I have said is: start by being clear about what "consciousness" really is. The hard part is in fact just the structure of awareness. The human, socially evolved aspects are almost trivially simple. But you have to strip them away to get down to the real questions.

Now on to your claim that my attempts to solve or bypass the hard problem by an appeal to systems causality are misguided. Care to explain why? Or should I just take your word on the issue :zzz:. Sources would be appreciated.
 
  • #26
apeiron said:
Now on to your claim that my attempts to solve or bypass the hard problem by an appeal to systems causality are misguided. Care to explain why?
Sorry, I was unclear. Maybe this explains your tone? I was trying to express the idea that your point ("the hard problem argument has in fact been incredibly damaging in my view") should not push you to evacuate the very existence of a hard problem.

apeiron said:
So what I have said is: start by being clear about what "consciousness" really is.
Yep. My point is simply that the problem is, IMHO, not where you think it is. If we were able to construct a robotic mind as good as, say, that of a chimp, then I doubt the human version would remain a problem. I may be wrong, and Vygotsky may then be needed, but the fact is that we can't reach this point now. So obviously there is something we do not understand, and obviously it can't be answered by an approach that puts all its emphasis on human language. If you still think there is no problem except for simulating the human mind specifically, then let's agree to disagree.
 
  • #27
Lievo said:
Sorry, I was unclear. Maybe this explains your tone? I was trying to express the idea that your point ("the hard problem argument has in fact been incredibly damaging in my view") should not push you to evacuate the very existence of a hard problem.

I don't think you have explained why a systems approach does not in fact evacuate(!) the very existence of the hard problem. I await your elucidation.
 
  • #28
So the question: is the hard problem hard because of what makes our sense of self specific to humans, or because of the part shared with many animals?

I would surmise that it is the part shared by both. The hard problem being based around the divide between "what it feels like" and its "objective" manifestations, the animal most likely does have qualia, and as such there is a hard problem for it as there is for us.

[...]The problem with any approach putting too much emphasis on language is that everything that makes consciousness a hard problem is already present when we're looking at this 'just' conscious phenomenon[...] I'm just saying language is not the right place to start when one is interested in the hard question of subjectivity.

What if you were to expand the idea of starting with "language" to the idea of starting with symbols? What if you were to expand the notion of starting with "symbols" to the idea of generalization? What is the relationship between generalization and symbolism?

For example, how can we know whether someone in a coma is or is not conscious? In retrospect, we know some have been misclassified, because we found a way to communicate with them. But will we one day be able to say that someone is NOT conscious 'just' by looking at his brain activity?

I don't see why this is that far out of the reach of science at all. I am not entirely familiar with brain-imaging techniques, but I would imagine that they will get more sophisticated, not only technologically but also in our ability to interpret them. I don't see this as the "hard problem", though; it isn't tied up with the completely internal subjective description. This is a form of inferring from knowledge about relations whether or not somebody is "conscious", and it doesn't seem much more complex than saying that somebody won't be able to do x, y, or z based on knowledge about brain damage.

Here, I do not claim to have more knowledge of what constitutes "awareness", or of the relationships between "awareness" and "subjectivity", but it does seem that we must trace it biologically much further down than humans in order to properly understand it.


Lots of research has been conducted into the nature of representations. It is taken as axiomatic that representations exist. Proponents of embodied cognition (http://en.wikipedia.org/wiki/Embodied_cognition) would disagree, but I really, really dislike their views

Here, my question to you is: why do you "really, really dislike" the views of embodied cognition theorists? I do not wish to challenge your position, because admittedly I do not have extensive knowledge of embodied cognition and, as such, my conceptions of it may be erroneous. That said, from what I have read, they seem to have extremely interesting views that have the possibility of providing a fresh perspective in cognitive research. While you may not subscribe completely to the views of the school, it may be silly to throw out all its insights because of certain disliked ones. The idea that we are rooted in our bodies and environment, are continually in interaction with both, and that our cognitive abilities arise through interactions with our body and environment does not seem like something that should be that controversial. Looking at it biologically, it actually seems quite plausible. I do not know their views on representations, though, and it seems that whatever their view is on that is what you dislike.
It seems to me that embodied cognition has grown away from the "computational" approach to mind, which equates the mind to a computer and looks at it in terms of syntactic operations. To me, it seems as though adaptivity is a necessary condition for intelligence. Adaptivity is a notion that is not understandable except through interactions with an environment, which are "mediated" by the body (and I put "mediated" in quotes because the wording could make it seem as though there is some "us" mediated by our body, which would lead us into some complex substance dualism). The idea that we have a computational database of "input-output" activity, stored in some "memory" form through environment interactions, simply doesn't seem as efficient as some general adaptive means of learning based around patterns and interaction with the environment. However, if one were to say "there are no representations", I would disagree, because of what I see as the necessity for representation given the way in which we know our brain to operate.
In terms of your list of representations, it would seem to me that considering areas of embodied cognition and its proposals about situated thinking would lead to a better understanding of the temporal representation of sequences and patterns: neuronal networks interacting with the environment/body and with each other through time.

As a side note, what I think will be, or possibly already is, a growing and important field of study for these questions is some general mathematical theory of learning. It certainly seems as though mathematically modelling a learning process as simple as Pavlovian conditioning, and then extending it to the interaction of similar learning processes and building networks of such processes, would be greatly beneficial to the understanding of learning.
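To make the side note concrete: about the simplest such model is the classic Rescorla-Wagner rule for Pavlovian conditioning, and it already shows the "learning driven by prediction error" pattern that comes up again later in this thread. Here is a minimal sketch (the function name, parameter values and trial setup are my own illustrative choices, not anyone's published code):

Code:
# Minimal Rescorla-Wagner model of Pavlovian conditioning (a sketch).
# V[s] is the associative strength of stimulus s; lambda_us is the maximum
# strength the unconditioned stimulus (US) can support; alpha is a learning rate.

def rescorla_wagner(trials, alpha=0.3, lambda_us=1.0):
    """Run trials of the form (stimuli, us_present) and track strengths."""
    V = {}
    history = []
    for stimuli, us_present in trials:
        prediction = sum(V.get(s, 0.0) for s in stimuli)          # summed prediction
        error = (lambda_us if us_present else 0.0) - prediction   # prediction error
        for s in stimuli:
            V[s] = V.get(s, 0.0) + alpha * error                  # shared-error update
        history.append(dict(V))
    return history

# Acquisition (tone paired with food), then extinction (tone alone).
trials = [({"tone"}, True)] * 20 + [({"tone"}, False)] * 20
for snapshot in rescorla_wagner(trials)[9::10]:
    print(snapshot)

The associative strength climbs toward its maximum during acquisition and decays during extinction, all driven by a single error term, which is why this humble model keeps getting rediscovered in grander settings.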
 
  • #29
apeiron said:
why a systems approach does not in fact evacuate(!) the very existence of the hard problem.
Because you can describe a philosophical zombie using a system approach. Because a system approach would not give us a criterion to prove someone unconscious. Because there is no need for subjectivity to explain function. All the same answer.

I guess you do not need any reference, but let's put one anyway:
http://consc.net/online/1.2d
 
  • #30
Now apeiron, I'm just asking if I have this right: you are saying that a complex systems perspective on the nature of awareness, human or otherwise, dissolves the "hard problem" because, with its multi-layered, multi-directional causal explanations, the divide between neural machinery and experience is largely closed. Despite this, though, some would claim that the "hard problem" still exists because, in principle, what it is like to "feel" something first hand is still unexplained. To this you would agree, but say that this "hard problem" will remain unsolved for the sole reason that experience comes first and all theories and attempts at "objectification" are the result of second-order mappings of experience. Subjective experience, by definition, can only be felt by one, and so the hard problem in that sense will not be solved; but this is accepted axiomatically and we proceed to model further. So in this sense, the hard problem is solved because the systems perspective provides a non-ad-hoc view of the emergence of such sophisticated aspects of reality, but by its starting point as modelling it cannot claim to "reduce" first-person experience, and this is not the aim.
It is largely an epistemological acceptance that "the hard problem" is not necessarily a "problem" at all, but simply a fact of nature.
If I have some of this wrong, or you disagree/agree/have comments, please feel free to share. I'm simply trying to see everyone's positions and am interested in a dynamical perspective on things, such as you seem to promote.

__________________________________________________________________________
*figured I would edit rather than add a whole new post*

To Lievo

Lievo said:
Because there is no need for subjectivity to explain function.

As a side note, you may want to read what Vygotsky has to say about the relation between subjectivity and function in the essay I posted before. He speaks about subjectivity as emerging from a "proprioceptive" field of reflexological reactions ("internal reflex arcs"), which, by virtue of being regulations of the internal environment/state of the system itself, can help explain why subjective processes are accessible to the individual alone. Certainly, it would seem to me that such broad behavioural, reflexological views on subjectivity and awareness are not human-centric and can be applied elsewhere.
 
Last edited:
  • #31
JDStupi said:
I would surmise that it would be the part shared by both. The hard problem being based around the divide between "what it feels like" and what its "objective" manifestations are, the animal most likely does have qualia and as such there is a hard problem for it as there is for us.
That's exactly my point. :approve:

JDStupi said:
What if you were to expand the idea of starting with "language" to the idea of starting with symbols? What if you were to expand the notion of starting with "symbols" to the idea of generalization? What is the relationship between generalization and symbolism?
What a great question! Yes, I do believe that the ability to generalise is a good place to start. Not sure it'll solve the problem, however, but that's what I'm trying.

----------

JDStupi said:
As a side note, you may want to read what Vygotsky has to say about the relation between subjectivity and function in the essay I posted before. He speaks about subjectivity as emerging from a "proprioceptive" field of reflexological reactions ("internal reflex arcs"), which, by virtue of being regulations of the internal environment/state of the system itself, can help explain why subjective processes are accessible to the individual alone. Certainly, it would seem to me that such broad behavioural, reflexological views on subjectivity and awareness are not human-centric and can be applied elsewhere.
Well, another obvious problem is that a spinal section suppresses proprioception but has never prevented anyone from being conscious. I'll take a look anyway. ;)
 
Last edited:
  • #32
Lievo said:
Deeper than that, IMHO. For example, how can we know whether someone who appears to be in a coma is or is not conscious? In retrospect we know some have been misclassified, because we found a way to communicate with them. But will we one day be able to say that someone is NOT conscious 'just' by looking at his brain activity?

This is of course a topic with a large literature, as organ harvesting depends on acceptable criteria for brain death.

The answer of the 1968 Harvard Brain Death Committee was that a loss of the brain's global integrative capacity was the key measure. So not actually something easy to measure, but clearly a systems-based approach to what would be the right answer.

There are dissenting views. UCLA epilepsy specialist Alan Shewmon has argued that death has not truly occurred until life processes have ceased even at the cellular or mitochondrial level.

And while the Catholic Church seems happy with "the irretrievable loss of the brain's integrative capacity" as the point where the immortal soul departs the material body :smile:, other faiths, such as Japanese Shintoism and Judaism, consider the body to be sacred and still alive while there is any sign of on-going physiological activity.

So in an arena of life where the answer in fact matters, the systems approach (rather than QM or panpsychism or whatever) was the best scientific answer.
 
  • #33
JDStupi said:
Now apeiron, I'm just asking if I have this right: you are saying that a complex systems perspective on the nature of awareness, human or otherwise, dissolves the "hard problem" because, with its multi-layered, multi-directional causal explanations, the divide between neural machinery and experience is largely closed.

Correct. Though it would be a claim still to be argued out in full detail of course.

I would also not frame it as local neural machinery~global experiencing as this reintroduces the very dualism that I want to avoid. Clearly these are words for two quite different things, so how can the rift have been healed?

Instead, based on systems arguments such as Stan Salthe's scalar hierarchy theory, I would talk about things that are the same essentially, but which become different simply because they act from opposing extremes of scale.

So for example, there are Stephen Grossberg's ART neural nets, where the global scale (long-term memory) acts downwards to constrain the actions of the local scale (short-term memory). The causality flows both ways, as of course the actions of short-term memory act upwards, in a constructive, additive fashion, to adjust the state of long-term memory. The system learns in this fashion.

And quite explicitly, the two levels of action are not dualistically different but fundamentally the same. Both are "memory". Everything else arises because memory is acting across different spatiotemporal scales, forcing one to be the prevailing context, the other the momentary perturbation of a localised event.

It is the same with Friston's Bayesian brain networks. The system generates global predictive states that constrain from the top down. Then it learns from localised errors of prediction that feed back up to adjust the system's future predictions.
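As a toy illustration of that two-scale loop - and this is only a sketch of the shared top-down/bottom-up pattern, not Grossberg's ART equations or Friston's free-energy formulation - consider:

Code:
import random

def predictive_loop(signal, eta=0.05):
    """Track a signal with a slow global prediction corrected by local errors."""
    prediction = 0.0                  # the global, slowly varying state (the "context")
    errors = []
    for sample in signal:
        error = sample - prediction   # local, momentary perturbation (bottom-up)
        prediction += eta * error     # the error nudges the global state
        errors.append(error)
    return prediction, errors

random.seed(0)
signal = [1.0 + random.gauss(0.0, 0.1) for _ in range(500)]
prediction, errors = predictive_loop(signal)
print(round(prediction, 2))                # settles near the source mean, ~1.0
early = sum(abs(e) for e in errors[:50]) / 50
late = sum(abs(e) for e in errors[-50:]) / 50
print(round(early, 3), round(late, 3))     # errors shrink as the context takes over

The only point of the sketch is the asymmetry of timescales: the prediction changes slowly and constrains the interpretation of every sample, while each error is a fast, local event that feeds back up to adjust it.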

I like to use language even more descriptive of "experiencing" that still respects the background systems modelling. So I often talk about ideas (instead of long-term memories) that act downwards to constrain, and impressions (instead of short-term memories) that act upwards to construct the ideas that are the long-run, predictive constraints.

So there is indeed a closed loop that leaves nothing out causally (so it is claimed, and the argument against it would have to show what in fact is being left out).

JDStupi said:
Despite this, though, some would claim that the "hard problem" still exists because, in principle, what it is like to "feel" something first hand is still unexplained. To this you would agree, but say that this "hard problem" will remain unsolved for the sole reason that experience comes first and all theories and attempts at "objectification" are the result of second-order mappings of experience.

Correct again. At the end of the day, we may indeed feel that the detail of our visual system explains why we experience blackish yellow as brown. So we can end up with a heck of a lot explained. But not then why brown is brownish. Or red, reddish.

So what is the nature of this final residue of difficulty? Really I think it comes down to the fact that models fail when they can no longer make distinct measurements.

Now for forest green, scarlet and navy, you have three colours that can be subjectively seen as mixtures. And this can be contrasted with colours that seem like pure hues. So put together the neuroscience story on the visual pathways (surround effects that blacken primary hues) and the subjectively observable difference, and the phenomenology seems legitimately explained away.

However for pure hues, it is no longer possible to subjectively experience an alternative. You can't imagine that instead of red, you might see bibble, or rudge, or whatever. There is no subjective measurement of "other" that can be made, even if perhaps there is a very full objective neuroscience account of everything your visual paths happen to be doing.

With brown, you can in fact surprise yourself that it is actually yellow you are looking at if you have the right lab lighting conditions. If you darken the surround in the right way, a chocolate bar becomes yellowish. And there is also an exact explanation of why in the way the visual system manufactures yellow as an opponent hue to blue at a later stage of the processing chain.

However brown still looks brown subjectively. And why it looks like that rather than rudge or bibble is again back into the area of no longer being able to subjectively observe a difference. A neuroscientist could give you all the model you want, but you can't make a measurement that would show, tweak this, tweak that, and your subjective state changes accordingly.
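As an aside, the "brown is blackened yellow" point can be checked numerically. A sketch using ordinary RGB/HLS colorimetry (the particular chocolate-brown triple is my own pick, and this says nothing about the neural mechanism, only about the arithmetic of the colour space):

Code:
import colorsys

# A typical chocolate-bar brown, as RGB components in 0..1.
brown = (0.36, 0.20, 0.09)

h, l, s = colorsys.rgb_to_hls(*brown)
print(f"hue = {h * 360:.0f} degrees")    # ~24 degrees: squarely in the orange-yellow range

# Same hue and saturation, lifted to mid lightness: an unmistakable orange.
bright = colorsys.hls_to_rgb(h, 0.5, s)
print([round(c, 2) for c in bright])

Nothing but luminance relative to the surround separates the two triples, which is why the chocolate bar can flip to yellowish under the right lab lighting.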
 
  • #34
Lievo said:
Because you can describe a philosophical zombie using a system approach.

Can you cite a reference to support this claim? Or would you prefer to make the argument yourself?

In the meantime, here is yet another voice speaking the systems message. Just a recent paper from someone reinventing the wheel on the subject, but it shows what is going on out there in research land.

Abstract
Mental causation is a philosophical concept attempting to describe the causal effect of the immaterial mind on a subject's behavior. Various types of causality have different interpretations in the literature. I propose and explain this concept within the framework of the reciprocal causality operating in the brain bidirectionally between local and global brain levels.

While committing myself to the physical closure assumption, I leave room for the suggested role of mental properties. Mental level is viewed as an irreducible perspective of description supervening on the global brain level. Hence, mental causation is argued to be interpreted as a convenient metaphor because mental properties are asserted to be causally redundant.

Nevertheless, they will eventually help us identify and understand the neural and computational correlates of consciousness. Within cognitive science, the proposed view is consistent with the connectionist and dynamic systems paradigms, and within the philosophy of mind, I see it as a form of non-reductive physicalism.

http://www.bicsconference.org/BICS2010online-preprints/ModelsOfConsciousness/36.pdf
 
  • #35
Hi Metarepresent and welcome to the forums!

You can do a search in the forums; this kind of question (https://www.physicsforums.com/search.php?searchid=2539069) was discussed here many times.

Basically, to have a sense of self you must have a single unified mental state, so for example a person suffering from multiple personality disorder or a split-brain patient does not experience the same sense of self as other people. They have two or more mental states. But again, they are conscious just as we are. No matter which self is present, it is just as conscious as the others. So this sense of self seems to be elusive, epiphenomenal.
http://kostas.homeip.net/reading/Science/Ramachandran/PhantomsInTheBrain.htm said:
So I hooked up the student volunteers to a GSR device while they stared at the table. I then stroked the hidden hand and the table surface simultaneously for several seconds until the student started experiencing the table as his own hand. Next I bashed the table surface with a hammer as the student watched. Instantly, there was a huge change in GSR as if I had smashed the student's own fingers.


Nowadays, the two materialistic theories about consciousness (mind-brain identity and functionalism) claim that consciousness is either in the neurons or in the functioning of the brain. So in the first case only creatures with human-like brains are marked as conscious, while in the second any "creature" functioning in a certain way could be conscious. In the mind-brain identity theory mental states are reduced to the physical, while in functionalism the mental is something which emerges as a property of the physical but can't be reduced to it. You can read what follows from these two views here - a great analysis by Jaegwon Kim.
http://www.iep.utm.edu/mult-rea/ said:
They could (a) deny the causal status of mental types; that is, they could reject Mental Realism and deny that mental types are genuine properties. Alternatively, they could (b) reject Physicalism; that is, they could endorse the causal status of mental types, but deny their causal status derives from the causal status of their physical realizers. Or finally, they could (c) endorse Mental Realism and Physicalism, and reject Antireductionism.
 
Last edited by a moderator:
  • #36
apeiron said:
Can you cite a reference to support this claim?
https://www.physicsforums.com/showpost.php?p=3135015&postcount=29

apeiron said:
This is of course a topic with a large literature, as organ harvesting depends on acceptable criteria for brain death (...) the systems approach (rather than QM or panpsychism or whatever) was the best scientific answer.
First news to me that current practices are based on a philosophical systems approach! I'd have said they are based on common sense, and that no one gives 'the systems approach' a **** when it comes to medical practice. Always happy to learn something.
 
Last edited by a moderator:
  • #37
Metarepresent said:
Also, before continuing this long diatribe: what are the best phenomenal characteristics we may attribute to consciousness? I believe unity, recursive processing style and egocentric perspective are the best phenomenal target properties attributed to the ‘self’.
Hi Meta. I must have missed this one. What Ramachandran and others are referring to when they refer to the "phenomenal characteristics" of consciousness is in fact what I was referring to previously. Phenomenal consciousness regards the phenomenal characteristics: what something feels like, the experience of it, the subjective experience that we have. The sense of unity is a phenomenal characteristic in the sense that it is a phenomenon that is subjective, a feeling or experience. I don’t know exactly what you mean by "recursive processing style", but I presume this regards how neurons interact and/or the various higher-order synaptic connections within the brain. If that’s the case, then this wouldn’t be a phenomenal characteristic of consciousness; it would be a psychological one (for lack of a better description), since it would be objectively definable and knowable. How neurons interact and give rise to conscious experience or metarepresentations is not a phenomenal characteristic of the brain or mind. The experience itself and metarepresentations are phenomenal characteristics. But how neurons interact to give rise to these phenomenal characteristics is not. They are purely neurological characteristics that can be objectively defined. Clearly, Ramachandran takes this view. See for example this YouTube video:


Metarepresent said:
Yeah, I adopt the stance of identity theory:

"the theory simply states that when we experience something - e.g. pain - this is exactly reflected by a corresponding neurological state in the brain (such as the interaction of certain neurons, axons, etc.). From this point of view, your mind is your brain - they are identical" (http://www.philosophyonline.co.uk/pom/pom_identity_what.htm)

A neural representation is a mental representation.

No, he is claiming it is with the manipulation of metarepresentations ('representations of representations') that we engage in consciousness. Many other species have representational capacities, but not to the extent of humans.

That’s correct. Ramachandran takes a very basic, uncontroversial computationalist approach to consciousness. I think you first need to clarify the difference between phenomenal and psychological characteristics, though. Ramachandran is pointing out that certain circuits in the brain create phenomenal consciousness and it is these circuits that are ‘in control’ in some fashion. Further, he is suggesting, as I’m sure you are aware, that there is what’s commonly referred to as a “one-to-one” relationship between the neuronal activity and the phenomenal experience that is responsible for this mental representation. Ramachandran takes the fairly uncontroversial stance that the overall phenomenal experience (and the many parts of that experience) is what equates to the mental representation.

All this isn’t to say that our present approach to consciousness is without difficulties, as Ramachandran points out in the video. Some very serious issues that pop out of this approach include “mental causation”, the problems with downward causation (i.e., strong downward causation), and the knowledge paradox. So once we understand the basic concept of mental representation as presented, for example, by Ramachandran, we then need to move on to what issues this particular model of the human mind gives rise to and look for ways to resolve those issues.
 
Last edited by a moderator:
  • #38
JDStupi said:
It seems as though some are speaking about a specifically human sense of self, the one we are acquainted with and experience, and are speaking about the necessary conditions for the existence of our sense of self, not necessarily any sense of self.

... On one hand, when we introspect or look at our sense of self we find that our I is built upon a vast network of memories, associative connections, generalized concepts, a re-flective (to bend back upon) ability, and embedding within a specific socio-cultural milieu. On the other hand we see that animals possess subjective states such as pain, pleasure, etc., which leads us to believe that animals too have an I. The problem then is: in what sense is the animal I and the human I the same in meaning or reference?

Lievo said:
So the question: is the hard problem hard because of what is making our sense of self specific to humans, or because of the part shared with many animals?

... The problem with any approach putting too much emphasis on language is that everything that makes consciousness a hard problem is already present when we're looking at this 'just' conscious phenomenon...


My comments were certainly aimed at what’s specifically human in “consciousness” and “sense of self.” I want to explain why that seems to me the fundamental issue and how it is bound up with language. I’m sorry for the length of this post... I’d be briefer if I thought I could be clear in less space.

First, I would say the “hard problem” is hard only because of our confusion between what’s specific to humans and what we share with other animals. When we treat “consciousness” as synonymous with “awareness” and talk about animals “possessing subjective states such as pain, pleasure, etc” or “having an ‘I’” we’re doing something that’s very basic to human consciousness – i.e. projecting our own kind of awareness on others, imagining that they see and feel and think like we do.

This is basic to our humanity because it’s the basis of human communication – experiencing other people as “you”, as having their own internal worlds from which they relate to us as we relate to them. This is what’s sometimes called “mindreading” in psychology. And if you go back to a primitive enough level, it’s something we share with other mammals, who are certainly able to sense and respond to what another animal is feeling.

This is one of a great many very sophisticated, highly evolved functions of the mammalian brain, that let animals focus their attention according to all kinds of very subtle cues in their environment. These abilities are quite amazing in themselves, but if you talk to biologists or ethologists I doubt you’ll find many who believe there is a “hard problem” here. It’s clear why these capacities evolved, it’s clear how evolution works, and there’s no mystery any more about the relationship of evolving self-reproducing organisms to the physical world.

Now what’s special about humans is not our neural hardware but the software we run on it, namely human language. It’s not at all similar to computer software (and our neural hardware is equally different from computer hardware) – but even so the term is a good one, because it distinguishes between what’s “built in” to our brains genetically and what’s “installed” from the outside, through our interpersonal relationships, as we grow up.

Part of the problem we have in understanding the situation with “consciousness” is that by now this software has evolved to become extremely sophisticated – it gives us all kinds of tools for directing our attention and responding to our environment that go far beyond what other animals do. And we were already very well versed in using much of this attention-focusing (“representing”) software when we were only 3 years old, long before we ever began to think about anything. We learned to use words like “you” and “me” long before we had any capacity for reflection.

So one lesson is – there’s no absolute difference between humans and other animals, just as there’s none between living beings and non-living matter. In both cases there comes to be a vast difference, because of a long, gradual evolutionary process that developed qualitatively new kinds of capacities.

Another lesson is – when we try to think reflectively about ourselves and the world, we’re using sophisticated conceptual tools that are built on top of even more sophisticated linguistic software, through which we interpret the world unconsciously, i.e. automatically. For example, we automatically tend to project our own kind of awareness on other people and animals and even (in many pre-industrial cultures) on inanimate objects in the world around us.

So as to the “hard problem” – when we humans look around and “perceive” things around us, we’re just as completely unconscious of the vast amount of linguistic software-processing involved, as we are of the vast amount of neural hardware-processing that’s going on. It’s very easy to imagine we’re doing something like what a cat does when it looks around, “just seeing.” And if we start thinking about it reflectively, we may very easily convince ourselves that the cat “must have an internal subjective world of sensation” or even “a self.” And now we do have a hard problem, one that will never be solved because its terms will never even be clearly defined.

We end up talking about “what it’s like to be a bat,” for example, as if that clarified the issue. But the difference between what it’s like to be you and what it’s like to be a non-talking animal is on the same order of magnitude as the difference between a stone and a living cell. It’s just that we’re not as far along at understanding our humanity as we are at understanding biology.
 
  • #39
JDStupi said:
The hard problem being based around the divide between "what it feels like" and what its "objective" manifestations are, the animal most likely does have qualia and as such there is a hard problem for it as there is for us.

Here, I do not claim to have more knowledge of what constitutes "awareness", and the relationships between "awareness" and "subjectivity", but it does seem that we must trace it biologically much further down than humans in order to properly understand it.


I agree that there's an aspect of the subject/object divide that goes back a long way in evolution. There’s a basic difference in perspective between seeing something “from outside” – the way we see objects – and seeing “from inside”, from one’s own point of view.

In fact, I think this kind of difference is important even in physics. Physics has its own “hard problem” having to do with the role of “the observer” in quantum mechanics. It seems that not even the physical world is fully describable objectively, “from outside” – at a fundamental level, it seems that you have to take a point of view inside the web of physical interaction in order to grasp its structure.

So it may well turn out to be meaningful not only to talk about the point of view of an animal, but the point of view of an atom.

The problem with treating this as a problem of “consciousness” – as even some reputable physicists do, sadly – is what I was trying to get at in my previous post. When we do that, we unconsciously import all kinds of unstated assumptions that an animal’s point of view on the world or an atom’s must be similar to our own.

Before anyone begins to ask questions about the relation of our conscious experience to that of an animal, I would recommend that they spend some time thinking about the very great differences that exist between the conscious experience of humans, say, in oral cultures and in literate culture. (Look up Walter Ong’s work or Eric Havelock’s, for example.) That helps give a sense of how radically different one’s “consciousness” and “sense of self” can be, even when the underlying language has hardly changed.

So the gist of my position is not that only humans have some sort of “internal life” going on in their brains. It’s that we tend to use terms like “consciousness” and “subjectivity” and “representation” to cover a vast range of very different things. If we want to understand these issues better, it’s the differences we should be focusing on.
 
  • #40
ConradDJ said:
So the gist of my position is not that only humans have some sort of “internal life” going on in their brains. It’s that we tend to use terms like “consciousness” and “subjectivity” and “representation” to cover a vast range of very different things. If we want to understand these issues better, it’s the differences we should be focusing on.

Certainly, this is exactly what I was getting at, and I completely agree. When I asked in what way the two "I"s were the same in meaning or reference, essentially what I was getting at is what you said: that the ideas of "I", "awareness", and "self" are not particulars we refer to with established definitions, but rather broad classes of things. We cannot seek some one explanatory principle that will illuminate the "nature of consciousness/awareness" and attempt to apply it to everything, for the sole reason that "something that explains everything, explains nothing": everything will just blend together in some indistinguishable blob, rather than letting us see the differences and their working interrelations and so further sharpen our ideas on these subjects.


ConradDJ said:
The problem with treating this as a problem of “consciousness” – as even some reputable physicists do, sadly

I agree with this also; those physicists who jump from the measurement problem's interactive aspects directly to "consciousness must be the source of the weirdness" are making terribly large, unjustified jumps and are suffering from "Good Science/Bad Philosophy". But we shouldn't get into that here.

ConradDJ said:
So as to the “hard problem” – when we humans look around and “perceive” things around us, we’re just as completely unconscious of the vast amount of linguistic software-processing involved, as we are of the vast amount of neural hardware-processing that’s going on. It’s very easy to imagine we’re doing something like what a cat does when it looks around, “just seeing.” And if we start thinking about it reflectively, we may very easily convince ourselves that the cat “must have an internal subjective world of sensation” or even “a self.”

This much I agree with also. In principle, I agree that there exists a "hard problem of consciousness" for both humans and animals, because I do believe that animals have qualia and internal states. Though what you touch upon is that methodologically, in our present state of knowledge, it would be silly to attempt to come up with a definitive criterion for "awareness" and "self" and the like; we must attempt to come up with some means of description that is somewhat independent of our folk-psychological notions of consciousness. These questions will only be answered by examining the many variegated processes that occur at all levels of biological organization. Moreover, studying specifically human forms of consciousness will most certainly shed light on the issues, because we will come to understand the nature of our perception better, and the ideas generated in the study of the emergence of human awareness and "self" will IMO undoubtedly prove useful for ideas in other biological fields.

And yes, taking cues from older Gestalt views in psychology, the nature of our internal awareness is structured in various gestalts; our perception of the moment is not the sum total of external qualia, but rather an integrated gestalt which is the structure of our internal experience. Drastically different cultures which focus on drastically different aspects of human life will almost certainly have a completely different structuring of their internal perceptual manifold. This is relevant because how could we hope to understand the nature of "what it is like to be a bat" from an internal perspective if we can barely understand what it is like to be in a completely different culture, or even what it is like to be the guy next door? This, I believe, is simply an unsolvable problem.
 
  • #41
Lievo said:

You claimed to have a specific argument against a systems approach. Let's hear it. Or if you want to evade the question by pointing at a page of links, identify the paper you believe supplies the argument.

Lievo said:
First news the current practices are based on a philosophical system approach! I'd have say it is based on common sense and that no one give 'the system approach ' a **** when it comes to medical practices. Always happy to learn something.

Again you supply your personal and unsupported reaction against an actual referenced case where people had to sit down and come up with a valid position on a fraught issue. It may indeed seem common sense to then arrive at a systems definition of consciousness here, but that rather undermines your comment, doesn't it?
 
Last edited by a moderator:
  • #42
ConradDJ said:
So the gist of my position is not that only humans have some sort of “internal life” going on in their brains. It’s that we tend to use terms like “consciousness” and “subjectivity” and “representation” to cover a vast range of very different things. If we want to understand these issues better, it’s the differences we should be focusing on.

This is precisely what people who want to debate consciousness should demonstrate a grasp of - the rich structure of the mind.

The hard problem tactic is to atomise the complexity of phenomenology. Everything gets broken down into qualia - the ineffable redness of red, the smell of a rose, etc. By atomising consciousness in this way, you lose sight of its actual complexity and organisation.

Robert Rosen called it isomerism. You can take all the molecules that make up a human body and dissolve them in a test tube. All the same material is in the tube, but is it still alive? Clearly you have successfully reduced a body to its material atoms, its components, but have lost the organisation that actually meant something.

Philosophy based on qualia-fication does the same thing. If you atomise, you no longer have a chance of seeing the complex organisation. So while some find qualia style arguments incredibly convincing, this just usually shows they have not ever really appreciated the rich complexity of the phenomenon they claim to be philosophising about.
 
  • #43
ConradDJ said:
My comments were certainly aimed at what’s specifically human in “consciousness” and “sense of self.”
Sure, which is to me the problem. Interesting post. If I may reformulate, you're saying that we should not anthropomorphise: it's not because my brain interprets an animal as having a feeling that the animal has this feeling - my brain doesn't know what's going on inside; it just interprets the external signs as human-like because of the way it is wired.

Fair enough. This is not attackable in any way - which is why it's the hard problem again. But when should we stop this line of thinking? By the very same logic, you don't know that I or anyone but you is conscious. You're making an analogy with yourself, and you may well be wrong to do so.

Of course I don't believe this solipsism, but it's exactly the same logic for chimps. So where do we stop? At this point, you would probably say: hey, but humans are humans: we know it's the same kind of brain, so for our species it's correct to reject solipsism.

I like this way of thinking, and would agree I don't know what it's like to be a bat. But I'd argue that both the behavior and the brain of chimps are so similar to human behavior and brains that I don't see any reason to think their thoughts are significantly different.

This was not obvious in Vygotsky's time, but have a look at this:

http://www.nytimes.com/2010/06/22/science/22chimp.html?_r=1
http://www.cell.com/current-biology/fulltext/S0960-9822(10)00459-8

When I see a young chimp peeing on the dead body of the one he attacked, I tend to think his subjective experience is quite similar to a human's in the same situation. I may be wrong, but I find the body of ethological data that has been collected in recent years quite convincing.

ConradDJ said:
These abilities are quite amazing in themselves, but if you talk to biologists or ethologists I doubt you’ll find many who believe there is a “hard problem” here. (...) It’s clear why these capacities evolved, it’s clear how evolution works, and there’s no mystery any more about the relationship of evolving self-reproducing organisms to the physical world.
*cough cough*. Well, I don't want to argue or discuss it further, but I promise it's not the view of most of my colleagues. :wink:
 
  • #44
ConradDJ said:
Before anyone begins to ask questions about the relation of our conscious experience to that of an animal, I would recommend that they spend some time thinking about the very great differences that exist between the conscious experience of humans, say, in oral cultures and in literate culture. (Look up Walter Ong’s work or Eric Havelock’s, for example.) That helps give a sense of how radically different one’s “consciousness” and “sense of self” can be, even when the underlying language has hardly changed.
Interesting. That may qualify as a demand I ask here: https://www.physicsforums.com/showthread.php?t=472298. If you could turn it into a specific claim supported by specific evidence, that'd be interesting.
 
Last edited by a moderator:
  • #45
apeiron said:
identify the paper you believe supplies the argument.
Sure. How could I not answer such a polite demand? However, if you don't mind, I'll wait for you to start providing the experimental evidence supporting some of your past claims.

apeiron said:
Again you supply your personal and unsupported reaction against an actual referenced case
My personal reaction is the reaction of a scientist who has worked specifically on this issue. So, to be perfectly clear, I know that at least regarding coma, your statement is bull.
 
  • #46
Lievo said:
My personal reaction is the reaction of a scientist who has worked specifically on this issue. So, to be perfectly clear, I know that at least regarding coma, your statement is bull.

As usual, references please. And here you seem to be saying that you can even cite your own research. So why be shy?
 
  • #47
JDStupi said:
In principle, I agree that there exists a "hard problem of consciousness" for both humans and animals, because I do believe that animals have qualia and internal states...

Drastically different cultures which focus on drastically different aspects of human life will almost certainly have a completely different structuring of their internal perceptual manifold. This is relevant because how could we hope to understand the nature of "what it is like to be a bat" from an internal perspective if we can barely understand what it is like to be in a completely different culture, or even what it is like to be the guy next door?


JD – Thanks, what you say makes sense to me. The point is not to stop people from trying to understand the mental life of chimps or bats, if for some reason they’re inclined to do that. But when people think about consciousness, I imagine it’s usually because they’re trying to understand their own – which is of course the only mentality they’ll ever actually experience. And it’s a very bad way to start by skipping over the many, many levels of attention-focusing techniques that we learn as we grow up into language – which is pretty much what human consciousness is made of – and defining the topic of study as some kind of generic “sense of self” that chimps have but maybe mollusks don’t.

There’s no absolute dividing-line between “conscious” and “non-conscious” – or “sentient” and “non-sentient” for that matter. These are highly relative terms. But I believe there are two huge leaps in our evolutionary history, with the origin of life and the origin of human communication. The problem is that we understand the first of these relatively well, and the second almost not at all.

apeiron said:
This is precisely what people who want to debate consciousness should demonstrate a grasp of - the rich structure of the mind...

Philosophy based on qualia-fication does the same thing. If you atomise, you no longer have a chance of seeing the complex organisation.


Yes. I do think the status of “qualia” is an interesting topic – but again, this is a highly relative term. As you well understand, it’s not as though there is something like a simple, “primitive” experience of “red” we can go back to and take as a pristine starting point for assembling a conscious mind.

Lievo said:
By the very same logic, you don't know that I or anyone but you is conscious. You're making an analogy with yourself, and you may well be wrong to do so.

Of course I don't believe this solipsism, but it's exactly the same logic for chimps. So where do we stop? At this point, you would probably say: hey, but humans are humans: we know it's the same kind of brain, so for our species it's correct to reject solipsism.


No, you’re not getting my point. I’m not interested in proving that solipsism is incorrect – no sensible person needs to do that. I’m just as little interested in proving that chimps are like humans, or unlike humans, both being obviously true in various ways. How can there be a “correct” answer to the question, “Do chimps have an internal mental life like ours?”

My point is that we interpret the mentality of others by projecting, and that this is a primary human capacity. If we didn’t imagine each other as people, communication in the human sense would be inconceivable. It’s obvious to me that no one imagines anyone else’s internal world “correctly” – what would that even mean? But because we automatically imagine each other as conscious beings, we open up the possibility of talking with each other even about our private, internal experience, and it’s amazing how profound the experience of communication can seem, sometimes.

But because the projective imagination is so fundamental to us, if we want to understand anything at all about consciousness and its history, we have to focus on the differences. Unless we point to specific distinctions between different ways of being "conscious", we literally don't know what we're talking about.

For example – before Descartes, I think it’s reasonable to suppose that no one in the history of human thought had ever experienced the world as divided between an “external objective reality” and a “subjective inner consciousness”... because that difference had never before been conceptualized, never brought into human language. But today any thinking person probably experiences the world that way, because the subject/object split became basic to Western thought during the 17th century, and long ago percolated into popular language. So today it takes a real stretch of imagination to glimpse what “consciousness” was like in the Renaissance... and we can only hope to do that if we can focus on landmarks like Descartes’ Meditations and take them seriously as consciousness-changing events.

I expect that to be controversial – but that’s what I mean by “focusing on the differences” in consciousness.
 
  • #48
I was going to suggest the mirror test, but I thought it would be a good idea to try it out on myself first. Step 1 is to take a good look at yourself in the mirror. Then put a mark on your forehead and look again in the mirror. If you don't pick at your forehead, that means either that you are not self-aware or that you are a slob. If you do pick at your forehead, it means that you are self-aware, but not necessarily; first you have to consider whether you always pick at your forehead. I failed the test myself, but I'm told that chimps, dolphins, and elephants have passed it.
 
Last edited:
  • #49
ConradDJ said:
Yes. I do think the status of “qualia” is an interesting topic – but again, this is a highly relative term. As you well understand, it’s not as though there is something like a simple, “primitive” experience of “red” we can go back to and take as a pristine starting point for assembling a conscious mind.

A lot of philosophy of mind is motivated by a naive atomistic definition of qualia. My general argument against this is the dynamicist's alternative. Qualia are like the whorls of turbulence that appear in a stream. Each whorl seems impressively like a distinct structure. But try to scoop the whorl out in a bucket and you soon find just how contextual that localised order was.

Plenty of people have of course reacted against the hegemony of atomistic qualia. For instance, here are two approaches that try to emphasise the contextual nature of seeing any colour.

On this new view of the origins of color vision, color is far from an arbitrary permutable labeling system. Our three-dimensional color space is steeped with links to emotions, moods, and physiological states, as well as potentially to behaviors. For example, purple regions within color space are not merely a perceptual mix of blue and red, but are also steeped in physiological, emotional and behavioral implications – in this case perhaps of a livid male ready to punch you.

http://changizi.wordpress.com/2010/...-richard-dawkins-is-your-and-my-red-the-same/

Building on these insights, this project defines qualia to be salient chunks of human experience, which are experienced as unified wholes, having a definite, individual feeling tone. Hence the study of qualia is the study of the "chunking," or meaningful structuring, of human experience. An important, and seemingly new, finding is that qualia can have complex internal structure, and in fact, are hierarchically organized, i.e., they not only can have parts, but the parts can have parts, etc. The transitive law does not hold for this "part-of" relation. For example, a note is part of a phrase, and a phrase is part of a melody, and segments at each of these three levels are perceived as salient, unified wholes, and thus as qualia in the sense of the above definition - but a single note is not (usually) perceived as a salient part of the melody.

http://cseweb.ucsd.edu/~goguen/projs/qualia.html
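Goguen's claim that the "part-of" relation on qualia is hierarchical yet non-transitive is easy to render as a data structure. A minimal sketch (the class and the musical example are my own framing, not Goguen's notation): salience is attached to the edge between whole and part, not to the part itself, so "salient part of" does not compose.

Code:
from dataclasses import dataclass, field

@dataclass
class Chunk:
    """A salient chunk of experience, with (possibly non-salient) parts."""
    name: str
    parts: list = field(default_factory=list)   # (Chunk, salient_here) pairs

    def salient_parts(self):
        return [part.name for part, salient in self.parts if salient]

note = Chunk("note C#")
phrase = Chunk("opening phrase", parts=[(note, True)])   # the note is salient in the phrase
melody = Chunk("melody", parts=[(phrase, True)])         # the phrase is salient in the melody

print(phrase.salient_parts())   # ['note C#']
print(melody.salient_parts())   # ['opening phrase'] - the note does not carry through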

And also these neo-Whorfian experiments show how language actively plays a part in introspective awareness, supporting the general Vygotskian point too.

Boroditsky compared the ability of English speakers and Russian speakers to distinguish between shades of blue. She picked those languages because Russian does not have a single word for blue that covers all shades of what English speakers would call blue; rather, it has classifications for lighter blues and darker blues as different as the English words yellow and orange. Her hypothesis was that Russians thus pay closer attention to shades of blue than English speakers, who lump many more shades under one name and use more vague distinctions. The experiment confirmed her hypothesis. Russian speakers could distinguish between hues of blue faster if they were called by different names in Russian. English speakers showed no increased sensitivity for the same colors. This suggests, says Boroditsky, that Russian speakers have a "psychologically active perceptual boundary where English speakers do not."

http://www.stanfordalumni.org/news/magazine/2010/mayjun/features/boroditsky.html

But clearly there is a heck of a lot more to be said about the status of qualia. It does not seem too hard to argue their contextual nature. It is in fact an "easy problem". Harder is then saying something about where this leaves the representational nature of a "contextualised qualia".

Even as a whorl in the stream, a qualitative feeling of redness is "standing for something" in a localising fashion. It is meaningfully picking out a spot in a space of possible reactions. So we have to decide whether, using the lens of Peircean semiotics, we are dealing with a sign that is iconic, indexical or symbolic. Or using Pattee's hierarchy theory distinctions, does qualiahood hinge on the rate dependent/rate independent epistemic cut?

In other words, we cannot really deal satisfactorily with qualia just in information theoretic terms. The very idea of representation (as a construction of localised bits that add up to make an image) is information based. And even if we apply a "contextual correction", pointing out the web of information that a qualia must be embedded in, we have not solved the issue in a deep way...because we have just surrounded one bit with a bunch of other bits.

What we need instead is a theory of meaning. Which is where systems approaches like semiotics and hierarchy theory come in.
 
  • #50
Goguen is a good source for a dynamicist/systems take on qualia and representations.

http://cseweb.ucsd.edu/~goguen/projs/qualia.html

You can see how this is a battle between two world views. One is based on bottom-up causal explanations, which start off seeming to do so well because they make the world so simple, then eventually smack into hard problems when it becomes obvious that something has gone missing. The other is based on a systems causality where there is also a global level of top-down constraint, so that causality is a packaged deal involving the interactions between local bottom-up actions and global top-down ones.

Anyway, some snippets from a Goguen paper on musical qualia which illustrate what others are currently saying (using all the same essential concepts that I employ, such as hierarchy, semiosis, anticipation, socialisation, contextuality...).

http://cseweb.ucsd.edu/~goguen/pps/mq.pdf

Musical Platonism, score nominalism, cognitivism, and modernist approaches in general all assume the primacy of representation, and hence all flounder for similar reasons. Context is crucial to interpretation, but it is determined as part of the process of interpretation, not independently or in advance of it. Certain elements are recognized as the context of what is being interpreted, while others become part of the emergent musical "object" itself, and still others are deemed irrelevant. Moreover, the elements involved and their status can change very rapidly. Thus, every performance is uniquely situated, for both performers and listeners, in what may be very different ways.

In particular, every performance is embodied, in the sense that very particular aspects of each participant are deeply implicated in the processes of interpretation, potentially including their auditory capabilities, clothing, companions, musical skills, prior musical experiences, implicit social beliefs (e.g., that opera is high status, or that punk challenges mainstream values), spatial location, etc., and certainly not excluding their reasons for being there at all (this is consistent with the cultural-historical approach of Lev Vygotsky).

Most scientific studies of art are problematic for similar reasons. In particular, the third person, objective perspective of science requires a stable collection of "objects" to be used as "data," which therefore become decontextualized, with their situatedness, embodiment, and interactive social nature ignored.

This paper draws on data from both science and phenomenology, in a spirit similar to the "neurophenomenology" of Francisco Varela, as a way to reconcile first and third person perspectives, by allowing each to impose constraints upon the other. Such approaches acknowledge that the first and third person perspectives reveal two very different domains, neither of which can be reduced to the other, but they also deny that these domains are incompatible.

Whatever approach is taken, qualia are often considered to be atomic, i.e., non-reducible, or without constituent parts, in harmony with doctrines of logical positivism, e.g., as often attributed to Wittgenstein’s Tractatus. Though I have never seen it stated quite so baldly, the theory (but perhaps "belief" is a better term, since it is so often implicit) seems to be that qualia atoms are completely independent from elements of perception and cognition, but somehow combine with them to give molecules of experience.

Peirce was an American logician concerned with problems of meaning and reference, who concluded that these are relational rather than denotational, and who also made an influential distinction among modes of reference, as symbolic, indexical, or iconic... A semiotic system or semiotic theory consists of: a signature, which gives names for sorts, subsorts, and operations; some axioms; a level ordering on sorts having a maximum element called the top sort; and a priority ordering on the constructors at each level...
Axioms are constraints on the possible signs of a system. Levels express the whole-part hierarchy of complex signs, whereas priorities express the relative importance of constructors and their arguments; social issues play an important role in determining these orderings. This approach has a rich mathematical foundation...

The Anticipatory Model captures aspects of Husserl’s phenomenology of time. For example, it has versions of both retention and protention, and the right kind of relationship between them. It also implies Husserl’s pithy observation that temporal objects (i.e., salient events or qualia) are characterized by both duration and unity. Since it is not useful to anticipate details very far into the future, because the number of choices grows very quickly, an implementation of protention, whether natural or artificial, needs a structure to accommodate multiple, relatively short projections, based on what is now being heard, with weights that increase with elapsed time; this is a good candidate for implementation by a neural net of competing Hebbian cell assemblies, in both the human and algorithmic instantiations, as well as robots (as in [71]), and it also avoids reliance on old style AI representation and planning.

This paper has attempted to explore the qualitative aspects of experience using music as data, and to place this exploration in the context of some relevant philosophical, cognitive scientific, and mathematical theories. Our observations have supported certain theories and challenged others. Among those supported are Husserl’s phenomenology of time, Vygotsky’s cultural-historical approach, and Meyer’s anticipatory approach, while Chalmers’ dualism, Brentano’s thesis on intentionality, qualia realism, qualia atomism, Hume’s pointillist time, and classical cognitivism have been disconfirmed at least in part.

In particular, embodiment, emotion, and society are certainly important parts of how real humans can be living solutions to the symbol grounding problem. The pervasive influence of cognitivism is presumably one reason why qualia in general, and emotion in particular, have been so neglected by traditional philosophy of mind, AI, linguistics, and so on. We may hope that this is now beginning to change.

This suggests that all consciousness arises through sense-making processes involving anticipation, which produce qualia as sufficiently salient chunks. Let us call this the C Hypothesis; it provides a theory for the origin and structure of consciousness. If correct, it would help to explain why consciousness continues to seem so mysterious: it is because we have been looking at it through inappropriate, distorting lenses; for example, attempting to view qualia as objective facts, or to assign them some new ontological status, instead of seeing segmentation by saliency as an inevitable feature of our processes of enacted perception.
 