What or where is our real sense of self?

  • #1

Main Question or Discussion Point

No one in philosophy or cognitive neuroscience has come to a general consensus about the “self”. Questions about the “self” and “consciousness” have been pestering me for a while. I believe I need more knowledge before I can truly adopt a position; hence this message, whose purpose is the acquisition of knowledge. I am currently studying a cognitive neuroscience textbook with a friend (he and I are both undergrad Biology majors), and we have started a blog documenting our progress. Granted, I realize this is a question that needs aid from all fields, especially philosophy.

First, let’s discuss the essential questions that must be answered in order to formulate a cohesive theory of self:

Does consciousness emerge from neural activity alone? Why is there always someone having the experience? Who is the feeler of your feelings and the dreamer of your dreams? Who is the agent doing the doing, and what is the entity thinking your thoughts? Why is your conscious reality your conscious reality? Why is consciousness subjective? Why does our perceived reality almost invariably have a center: an experiencing self? How exactly, then, does subjectivity, this “I”, emerge? Is the self an operation rather than a thing or repository? How to comprehend subjectivity is the deepest puzzle in consciousness research. The most important of all questions is how do neurons encode meaning and evoke all the semantic associations of an object?

Also, before continuing this long diatribe: what are the best phenomenal characteristics we may attribute to consciousness? I believe unity, a recursive processing style and an egocentric perspective are the best phenomenal target properties attributed to the ‘self’. Furthermore, there are still issues all theories of consciousness must address. These include, but are not limited to, binding (e.g., how does the property of coherence arise in consciousness when processing in the brain is distributed across many domains? Wolf Singer claims that the synchronization of oscillatory activity may be the mechanism for the binding of distributed brain processes), QUALIA, the Cartesian Theatre (e.g., how can we create a theory of consciousness without falling into the dualism trap Dennett warns against), and so on.
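
To make Singer's binding-by-synchrony idea a bit more concrete, here is a toy Kuramoto-style simulation (my own illustrative sketch, not Singer's actual model; all parameter values are made-up assumptions) showing how a population of weakly coupled oscillators can fall into a coherent, synchronized state:

```python
# Toy sketch only: Kuramoto-style phase oscillators falling into synchrony.
# Parameter values are illustrative assumptions, not taken from the literature.
import numpy as np

rng = np.random.default_rng(0)
N = 100                                 # number of oscillators ("neural assemblies")
K = 2.0                                 # coupling strength (assumed)
dt = 0.01                               # integration step
omega = rng.normal(10.0, 1.0, N)        # intrinsic frequencies, rad/s (assumed spread)
theta = rng.uniform(0.0, 2 * np.pi, N)  # random initial phases

def coherence(phases):
    """Kuramoto order parameter r: 0 = incoherent, 1 = perfectly synchronized."""
    return np.abs(np.exp(1j * phases).mean())

print(f"coherence before coupling acts: {coherence(theta):.2f}")
for _ in range(5000):
    z = np.exp(1j * theta).mean()       # complex order parameter r * exp(i * psi)
    r, psi = np.abs(z), np.angle(z)
    theta += dt * (omega + K * r * np.sin(psi - theta))  # mean-field Kuramoto update
print(f"coherence after: {coherence(theta):.2f}")
```

With the coupling strength above the synchronization threshold the order parameter climbs from near zero toward one; set K = 0 and it stays low, which is the contrast the binding-by-synchrony story trades on.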

For some reason, the self-representational theories of subjectivity appeal to me. I believe higher-order representations play a huge role in the origin of subjectivity (as proposed by Ramachandran and others). I am VERY interested in what you have to say regarding this:

"Very early in evolution the brain developed the ability to create first-order sensory representation of external objects that could elicit only a very limited number of reactions. For example a rat's brain has only a first-order representation of a cat - specifically, as a furry, moving thing to avoid reflexively. But as the human brain evolved further, there emerged a second brain - a set of nerve connections, to be exact - that was in a sense parasitic on the old one. This second brain creates metarepresentations (representations of representations – a higher order of abstraction) by processing the information from the first brain into manageable chunks that can be used for a wider repertoire of more sophisticated responses, including language and symbolic thought. This is why, instead of just “the furry enemy” that it for the rat, the cat appears to you as a mammal, a predator, a pet, an enemy of dogs and rats, a thing that has ears, whiskers, a long tail, and a meow; it even reminds you of Halle Berry in a latex suit. It also has a name, “cat,” symbolizing the whole cloud of associations. In sort, the second brain imbues an object with meaning, creating a metarepresentation that allows you to be consciously aware of a cat in a way that the rat isn’t.

Metarepresentations are also a prerequisite for our values, beliefs, and priorities. For example, a first-order representation of disgust is a visceral “avoid it” reaction, while a metarepresentation would include, among other things, the social disgust you feel toward something you consider morally wrong or ethically inappropriate. Such higher-order representations can be juggled around in your mind in a manner that is unique to humans. They are linked to our sense of self and enable us to find meaning in the outside world – both material and social – and allow us to define ourselves in relation to it. For example, I can say, “I find her attitude toward emptying the cat litter box disgusting.”
- The Tell-Tale Brain by VS Ramachandran page 247​

It is with the manipulation of meta-representations, according to Ramachandran, that we engage in human consciousness as we know it. As another author put it, “Our evolved type of conscious self-model is unique to the human brain: by representing the process of representation itself, we can catch ourselves – as Antonio Damasio would call it – in the act of knowing”. I have also read many other plausible self-representational theories of consciousness. For example, I find this one very convincing, due to how it places the notion of a recursive consciousness within an evolutionary paradigm:

http://www.wwwconsciousness.com/Consciousness_PDF.pdf [Broken]
 

Answers and Replies

  • #2
apeiron
Gold Member
An entirely different school of thought (and the correct one :smile:) is that self-awareness and all the other human "higher mental abilities" are the result of language and cultural evolution. Words, as a new level of symbolic order and constraint, scaffold more elaborate forms of thinking.

Lev Vygotsky (the sociocultural school) and to a lesser extent GH Mead (symbolic interactionism) are the best sources here. The idea regularly gets rediscovered, as by philosopher Andy Clark.

So the meta-thinking is due to social evolution (call it memetic if you like) rather than biological (genetic). This is why neuroscience does not find "higher faculties" in the brain's architecture. Even though it has really, really, tried to.
 
  • #3
Pythagorean
Gold Member
If a single human somehow developed without any stimulus from the environment, it would seem common sense that it would have no thought processes, no self.

What neuroscience can do is try to find the means by which the brain is able to engage in social activity and take advantage of it; how it stores the information, changes it, and contributes to the information ensemble that is society.

For instance, spatial metaphors are associated with spatial processing in the parietal lobes, as ways of "perceiving" things that aren't directly detectable by our senses. In the most abstract sense, we use visualization in science to transform measurable variables into perceivable spatial objects (plots or graphs). But we also use a lot of spatial metaphors to describe emotional situations and even the self ("within", "inside") as compared to the "outside" world.

Linguistic relativity definitely plays a role in this; societies (even nonhuman societies) have physically evolved to the point where they can organize a body of knowledge (stored in language) to augment the simpler signaling-protein language of cells. I speculate that human language is possibly the most sophisticated among animals.
 
  • #4
Q_Goest
Science Advisor
Homework Helper
Gold Member
Hi Metarepresent. Welcome to the board.
No one in philosophy or cognitive neuroscience has come to a general consensus about the “self”.
You said a mouthful right there. You’ll get different opinions from everyone, and more often than not, people will talk right past each other, not knowing or not understanding what the other person is talking about, so getting a solid grounding in some of the concepts before we refer to them is important.

When we talk about consciousness, there are various phenomena that we might be referring to. One person might be talking about the objective aspects of consciousness (psychological consciousness), but others might be thinking about the subjective aspects of consciousness, also known as phenomenal consciousness. The two are very different, though some folks can’t or won’t differentiate between the two.

In his book "The Conscious Mind", Chalmers talks about these 2 different concepts of mind, the phenomenal and the psychological. The phenomenal aspect of mind "is characterized by the way it feels ... ". In other words, the phenomenal qualities of mind include those phenomena that form our experiences or how experience feels. The psychological aspects of mind in comparison "is characterized by what it does." The psychological aspects of mind are those things that are objectively measurable such as behavior or the interactions of neurons.

Consider phenomenal consciousness to be a set of phenomena: the experience of the color red, how sugar tastes, what pain feels like, what hot or cold feel like, what love and anger feel like, and so on are all phenomenal qualities of mind. So when we refer to phenomenal consciousness, we're referring to that aspect of mind that is characterized by our subjective experiences.

In comparison, we can talk about psychological consciousness, which again is a set of phenomena. Psychological consciousness includes that which is objectively measurable, such as behavior. We might think phenomenal consciousness and psychological consciousness are one and the same, but they pick out separate and distinct phenomena. One is subjective, the other is objective.

Chalmers states, "It seems reasonable to say that together, the psychological and the phenomenal exhaust the mental. That is, every mental property is either a phenomenal property, a psychological property, or some combination of the two. Certainly, if we are concerned with those manifest properties of the mind that cry out for explanation, we find first, the varieties of conscious experience and second, the causation of behavior. There is no third kind of manifest explanandum, and the first two sources of evidence - experience and behavior - provide no reason to believe in any third kind of nonphenomenal, nonfunctional properties ..."

So if you’re referring to how it is we experience anything, you are referring to phenomenal consciousness. If you refer to how neurons interact or how our behavior is affected by something, you are referring to psychological consciousness. The difference between the two, and why phenomenal consciousness should exist at all, is often what is referred to as the explanatory gap. We can explain why neurons interact in a given way, but why they should give rise to phenomenal consciousness isn’t explained by explaining how neurons interact.

Does consciousness emerge from neural activity alone?
The standard paradigm of mind is that consciousness is emergent from the interaction of neurons. That isn’t to say that everyone agrees this is true, but that is what is generally taken as true. It gives rise to a few very serious problems and paradoxes though, so some folks have looked to see where those logical inconsistencies have arisen and how one might reconcile them. So far, I don’t see any consensus on how to reconcile these very serious issues, so logically it would seem to me the standard theory of consciousness doesn’t provide an adequate explanation.

Why is there always someone having the experience? Who is the feeler of your feelings and the dreamer of your dreams? Who is the agent doing the doing, and what is the entity thinking your thoughts?
This line of reasoning requires a religious type of duality, as if there were some kind of “soul” that is separate and distinct from the brain, the neurons and the bits and pieces that make up a person. But there's no need to consider this soul. Consider only that there are various phenomena created by the brain, some of which are subjective and can't be objectively measured. That's enough to explain the sense of self. The sense of self is a phenomenon created by the brain, a feeling that associates the phenomenal experience with the body that is having the experience. Note that the term “duality” in philosophy can have another, very different meaning, however, which has nothing to do with religion. That’s a separate issue entirely.

The most important of all questions is how do neurons encode meaning and evoke all the semantic associations of an object?
I’d mentioned earlier that there are some very serious logical inconsistencies in the present theory of how phenomenal consciousness emerges from the interaction of neurons. This is one of them. Stevan Harnad calls this the “symbol grounding problem”. Basically, computational accounts of mind, in which consciousness emerges from the interaction of neurons, are considered to be “symbol manipulation systems”, and as such there is no grounding of the symbols in anything intrinsic to nature.

Regarding the reference to Ramachandran, one can ask whether or not his discussion of self-representation is a discussion about the psychological aspect of consciousness (ie: the objectively measurable one) or the phenomenal aspect. We can talk about how neurons interact, how additional neurons such as mirror neurons interact, and the resulting behavior of an animal or person, but it gets us no closer to closing the explanatory gap. Why should there be phenomenal experiences associated with any of those neuron interactions? Without explaining why these additional phenomena should arise, we haven't explained phenomenal consciousness.
 
  • #5
disregardthat
Science Advisor
Consider the idea of the philosophical zombie, which you have probably heard of. It seems to me that any objective theory of consciousness cannot distinguish between such zombies and human beings with subjective experience. I agree with Q_Goest: there is an important distinction between the objective phenomenon of consciousness and the subjective quality of it, and it seems that the latter cannot be touched upon in any objective theory. One can imagine the existence of such zombies, and in my opinion you cannot infer that some particular individual is not such a being.
 
  • #6
apeiron
Gold Member
In comparison, we can talk about psychological consciousness, which again is a set of phenomena. Psychological consciousness includes that which is objectively measurable, such as behavior. We might think phenomenal consciousness and psychological consciousness are one and the same, but they pick out separate and distinct phenomena. One is subjective, the other is objective.
Yes, and so one is also folk lore, the other is science. I vote for the science.

And what science can tell us, for example, is that introspection is learnt and socially constructed. There is no biological faculty of "introspective awareness". It is a skill you have to learn. And what most people end up "seeing" is what cultural expectations lead them to see.

A classic example is dreams. Very few people learn to accurately introspect on the phenomenal nature of dreams.

The hard problem of consciousness really boils down to the fact that people find it too hard to study all the neurology, psychology, anthropology and systems science involved in being up to date with what is known.

It is so much easier to be a Chalmers and say all that junk is irrelevant, he has no need to learn it, because there will always be a phenomenological mystery. Lots of people love that line because it justifies a belief in soulstuff or QM coherence magic. It is the easy cop-out.

There is also a philosophy of science issue being trampled over here.

The point of scientific models is to provide objective descriptions based on "fundamentals" or universals. If you are modelling atoms or life, you are not trying to provide a subjective impression (ie: particular and local as opposed to general and global) of "what it is like" to be an atom, or a living creature.

So a clear divide between objective description and subjective experience is what we are aiming for rather than "a hard problem". We are trying to get as far away from a subjective stance as possible, so as to be maximally objective (see Nozick's Invariances for example).

Having established an "objective theory of mind" we should then of course be able to return to the subjective in some fashion.

It would be nice to be able to build "conscious" machines based on the principles discovered (for instance, Grossberg's ART neural nets or Friston's Bayesian systems). It would be nice to have a feeling of being able to understand why the many aspects of consciousness are what they are, and not otherwise (for example, why you can have blackish green, red and blue - forest green, scarlet, navy - but not blackish yellow).

But it takes a level of study that most people are not willing to undertake to appreciate exactly how much we do already know, and how far we might still have to go.
 
  • #7
Q_Goest
Science Advisor
Homework Helper
Gold Member
Yes, and so one is also folk lore, the other is science. I vote for the science.

...

The hard problem of consciousness really boils down to the fact that people find it too hard to study all the neurology, psychology, anthropology and systems science involved in being up to date with what is known.

It is so much easier to be a Chalmers and say all that junk is irrelevant, he has no need to learn it, because there will always be a phenomenological mystery. Lots of people love that line because it justifies a belief in soulstuff or QM coherence magic. It is the easy cop-out.

There is also a philosophy of science issue being trampled over here.

...

But it takes a level of study that most people are not willing to undertake to appreciate exactly how much we do already know, and how far we might still have to go.
Ok, so… psychological consciousness is science and phenomenal consciousness is folklore? And you vote for science. And people like Chalmers say a lot of junk because he’s copping out? And the science issue is being trampled and the level of study that you have is magnificent and other folks just aren’t up to your standards. Is that about right? That’s quite the attitude you have.

Do you know what browbeat means?
brow•beat –verb (used with object), -beat, -beat•en, -beat•ing.
to intimidate by overbearing looks or words; bully: They browbeat him into agreeing
I’m sure you’re much smarter than me or anyone else here, but how about you cut the sh*t and stop acting like a bully on the school playground.

But there seems less point jumping up and down in a philosophy sub-forum about people wanting to do too much philosophy for your tastes. We get that you can get by in your daily work without raising wider questions.
https://www.physicsforums.com/showthread.php?t=451342&+forum&page=4

To me this is demonstrating the inflexibility of thought I am talking about. Me clever scientist, you dopey philosopher. Ugh. End of story.
https://www.physicsforums.com/showthread.php?t=302413&+forum&page=2

Ok, so wait… if you are the clever scientist and I’m the dopey philosopher, then why did you say that ZapperZ was the clever scientist and you were the dopey philosopher. And why is this philosophy forum too much philosophy for you when you were arguing that you wanted to do philosophy in the philosophy forum? I’m SOOOOOOO confused.

I’m sure you have a rational explanation for all this and you won’t browbeat people like you always do. So, why haven’t you joined in the discussion about improving the philosophy forum in the science advisors forum? I haven’t seen anything from you there. It’s like you don’t seem to care about the philosophy forum or where it’s going or the new rules or anything. You really should discuss this with us. It’s up at the top of the main page under “science advisors forum”.
 
  • #8
disregardthat
Science Advisor
apeiron said:
It is the easy cop-out.
Cop out from what? The hard problem of consciousness doesn't boil down to a lack of knowledge of neurobiology. The opinions you are attacking aren't the same as the QM-mysticism, new-age soul-stuff out there. They are not objecting to, or "trampling on", the science behind neurology and how it relates to consciousness, and that science is not being degraded for what it is in its own domain. One is simply emphasizing the important distinction between it and what we call subjective experience, which again and again seems to be interpreted as a counter-argument against, or a cop-out from, scientific endeavour - emphasizing that distinction, and drawing the line between the subjective nature of consciousness and what actually can be concluded from objective, measurable results.
 
  • #9
apeiron
Gold Member
Ok, so wait… if you are the clever scientist and I’m the dopey philosopher, then why did you say that ZapperZ was the clever scientist and you were the dopey philosopher. And why is this philosophy forum too much philosophy for you when you were arguing that you wanted to do philosophy in the philosophy forum? I’m SOOOOOOO confused.
But my position was explained quite clearly in that thread. Did you not read it? Here it is again....

To me this is demonstrating the inflexibility of thought I am talking about. Me clever scientist, you dopey philosopher. Ugh. End of story.

You are basing your whole point of view on the assumption that scientists and philosophers have ways of modelling that are fundamentally different. Yet I'm not hearing any interesting evidence as to the claimed nature of the difference.

I am putting forward the alternative view that modelling is modelling, and there are different levels of abstraction - it is a hierarchy of modelling. Down the bottom, you have science as technology, up the top, you have science as philosophy. And neither is better. Both have their uses.
And I've frequently made these points to explain my general position on "philosophy"....

1) Classical philosophy is instructive: it shows the origin of our metaphysical prejudices - why we believe what we believe. And those guys did a really good job first time round.

2) Modern academic philosophy is mostly shallow and a waste of time. If you want the best "philosophical" thinking, then I believe you find it within science - in particular, theoretical biology when it comes to questions of ethics, complexity, life, mind and meaning.

3) Unfortunately the level of philosophical thought within mind science - neuro and psych - is not as sophisticated as in biology. Largely this is because neuro arises out of medical training and psych out of computational models. However, give the field another 20 years and who knows?

4) I have a personal interest in the history of systems approaches within philosophy, so that means Anaximander, Aristotle, Hegel, Peirce, etc.

5) I've had personal dealings with Chalmers and others, so that informs my opinions.

I’m sure you have a rational explanation for all this and you won’t browbeat people like you always do. So, why haven’t you joined in the discussion about improving the philosophy forum in the science advisors forum? I haven’t seen anything from you there. It’s like you don’t seem to care about the philosophy forum or where it’s going or the new rules or anything. You really should discuss this with us. It’s up at the top of the main page under “science advisors forum”.
I really can't see any link to such a forum. And I certainly was not invited to take part :smile:.

As to where the philosophy forum should go, it might be nice if it discussed physics a bit more. I wouldn't mind if topics like freewill and consciousness were banned because there just isn't the depth of basic scientific knowledge available to constrain the conversations.
 
  • #10
Pythagorean
Gold Member
The left thinks you're right, the right thinks you're left. Nobody likes a moderate.
 
  • #11
Pythagorean
Gold Member
  • #12
Q_Goest
Science Advisor
Homework Helper
Gold Member
Apeiron, the tone of your last post was much improved. I’ve already made the point that some folks can’t or won’t differentiate between psychological consciousness and phenomenal consciousness. I won’t hijack this thread to push my views and I’d ask that you attempt to do the same.
 
  • #13
Lievo
An entirely different school of thought (and the correct one :smile:) is that self-awareness and all the other human "higher mental abilities" are the result of language and cultural evolution.
One thing I'd love you to discuss is why you're taking for granted that self-awareness is a higher mental ability.

One needs to think that if one is to believe that your school of thought is the one most likely to add an interesting piece of evidence regarding awareness. Maybe if no one has yet managed to develop a robotic awareness, that's because our present ideas are not enough to explain awareness. In particular, for your position the problem may well be that awareness is simply not explained by language. My two cents :wink:
 
  • #14
ConradDJ
Gold Member
An entirely different school of thought (and the correct one :smile:) is that self-awareness and all the other human "higher mental abilities" are the result of language and cultural evolution. Words, as a new level of symbolic order and constraint, scaffold more elaborate forms of thinking.

I would agree that it’s useless to try to understand “consciousness” outside the context of language and human communication. The “sense of self” humans have is something we develop when we’re very young, by talking to ourselves. I don’t see any reason to think that someone who grew up having no contact with other humans would develop anything like the “sense of self” that we all take for granted.

Of course we’ll never know what such a person’s internal experience is like, if they can’t tell us about it. And they’ll never know anything about it themselves, if they can’t ask themselves about it.

Likewise it seems obvious to me that the “meta-representation” discussed by Ramachandran is built on language... although he writes as though language were just one of the many “more sophisticated responses” that our “second brain” somehow supports.

No doubt everything we do and experience is supported by our neural architecture. Ramachandran says, “as the human brain evolved further...” – but the thing is, once interpersonal communication begins to evolve, there’s a completely new, non-genetic channel for passing things on from one generation to the next... and a corresponding selective “pressure” to improve the communications channel and make it more robust. So our brains must have evolved through biological evolution to support this far more rapid kind of evolution through personal connection.

But whatever “higher” capacities our human brain has evolved, they can only function to the extent each of us learns to talk. We have a “conscious self” to the extent we have a communicative relationship with ourselves, more or less like the relationships we have with others.
 
  • #15
JDStupi
First, to Lievo: possibly, if you wish to better understand why apeiron (or anybody, for that matter) "takes for granted" the idea that self-awareness is a higher mental function, you could start by reading this paper:

http://www.marxists.org/archive/vygotsky/works/1925/consciousness.htm

I'm sure you will find that it elucidates various methodological problems with different approaches to psychology/ the problem of awareness and the beginning of the socially mediated approach to language and self-awareness.

Also, to ConradDJ, I just figured I would throw it out there, out of fairness to Ramachandran, that later on in the book he says something quite similar to what you said about "...evolution through personal connection". He speaks, speculatively, about portions of the brain responsible for the use of tools in the environment and for social responses in apes undergoing a random mutation; what originally evolved for tool use and primitive social response then served as an exaptation that freed us from more biologically determined responses over long time scales and enabled us to transmit learned practices vertically across generations, and horizontally within a generation on much shorter time scales.
That seems to me to be close to what you said.
 
  • #16
Lievo
I don’t see any reason to think that someone who grew up having no contact with other humans would develop anything like the “sense of self” that we all take for granted.
Conrad, was it to answer my question? So you think that no animal has a sense of self, right?

I'm sure you will find that it elucidates various methodological problems with different approaches to psychology/ the problem of awareness and the beginning of the socially mediated approach to language and self-awareness.
JDStupi, again this is a description of Vygotsky's position, not an answer in any way. Specifically, in the link you pointed to he took for granted that:

all animal behaviour consists of two groups of reactions: innate or unconditional reflexes and acquired or conditional reactions. (...) what is fundamentally new in human behaviour (...) Whereas animals passively adapt to the environment, man actively adapts the environment to himself.
I agree this was 'obvious' in his time - a time when surgeons thought that babies could not feel pain. I just don't see how one can still believe that - nor about babies, for that matter. Don't you now think it's quite obvious many animals behave on their own, have intents, feelings, and thus must have a sense of self?
 
  • #17
For some reason, the self-representational theories of subjectivity appeals to me. I believe higher-order representations have a huge role in the origin of subjectivity (as proposed as Ramachandran and others). I am VERY interested in what you have to say regarding this:

"Very early in evolution the brain developed the ability to create first-order sensory representation of external objects that could elicit only a very limited number of reactions. For example a rat's brain has only a first-order representation of a cat - specifically, as a furry, moving thing to avoid reflexively.[...]​
What is a representation? It is a presentation (an image, a sound, or otherwise) in the mind. So when there is a representation there is necessarily a consciousness.

So while I will agree that "the self" may be a representation and that it has changed over time to become more elaborate and complex, I do not think putting a representation at the basis of consciousness will help explain how consciousness can come from nonconscious matter that doesn't have the ability to represent anything. The bold bit in Ramachandran's text is the part that needs explaining.
 
  • #18
apeiron
Gold Member
One thing I'd love you to discuss is why you're taking for granted that self-awareness is a higher mental ability.
As per my original post, the reasons for holding this position are based on well established, if poorly publicised, research. Vygotsky is the single best source IMHO. But for anyone interested, I can supply a bookshelf of references.

As to an animal's sense of self, there is also of course a different level of sense of self, which is a basic sense of embodiment and awareness of body boundaries. Even an animal "knows" its tongue from its food, and so which bit to chew. The difference that language makes is the ability to think about such facts. An animal just "is" conscious. Humans, scaffolded by words, can step back and think about that fact - which is when it starts to seem surprising.
 
  • #19
JDStupi
Don't you now think it's quite obvious many animals behave on their own, have intents, feelings, and thus must have a sense of self?
This is the part of your post, I believe, that most represents the source of disagreement between you and others. The problem, in my mind, is that quite possibly what is happening is that an argument is developing over different topics. It seems as though some are speaking about a specifically human sense of self, the one we are acquainted with and experience, and are speaking about the necessary conditions for the existence of our sense of self, not necessarily any sense of self.
The problem is that we are prone to get into semantic disagreements over what constitutes a "self" at this level of knowledge, because we are arguing over the borders of the concept of a "self". On one hand, when we introspect or look at our sense of self we find that our "I" is built upon a vast network of memories, associative connections, generalized concepts, a re-flective (to bend back upon) ability, and embedding within a specific socio-cultural milieu. On the other hand we see that animals possess "subjective" states such as pain, pleasure, etc., which leads us to believe that animals too have an "I". The problem then is: in what sense are the animal "I" and the human "I" the same in meaning or reference?
Another question that arises, and that points out different conceptions of "self", is "Is awareness a sufficient condition for a sense of self?" - and if not, what is?

So while I will agree that "the self" may be a representation and that it has changed over time to become more elaborate and complex, I do not think putting a representation at the basis of consciousness will help explain how consciousness can come from nonconscious matter that doesn't have the ability to represent anything. The bold bit in Ramachandran's text is the part that needs explaining.
We must also be sure not to put words into Rama's mouth. I do not believe that Rama has any intent of saying that "all consciousness arises from representation", for this would simply be explaining opium in terms of its soporific properties. He, I believe, is specifically concerned with the existence of the "higher" forms of human consciousness. Of course, postulating that no animal has any sense of self whatsoever and that humans then made a discrete jump to a fully developed sense of self would render a "proper" explanation of awareness and consciousness impossible, because then it could only be explained ahistorically and not within a biological evolutionary context. That said, tracing the genealogical development of "awareness", "self", and their interrelation through biological history is not a question for human psychology, and in order to answer such a question we must seek the roots much further down. In fact, the very existence of cellular communication in some sense has a "representational" character, in that some specific process is taken to signal for some other regulation of the "behaviour" or homeostasis of the system. Anticipating something apeiron will most likely suggest, and he knows much more about this than I do, this line of thought is similar to that of "biosemiotics", which is essentially what these boundary-straddling discussions of "self" and "consciousness" need.
The unanswered question is "At what point do semantic operations arise out of the purely physical?" This question will be closely related to the question of the origin of the encoding of information in a DNA molecule (or some closely related antecedent molecule). It is difficult to think about the origins of such processes because we are prone to say "How did 'it' 'know' to encode information?", which is a form of homunculus fallacy, and circumventing this explanatory gap with a naturalistic explanation will be difficult.
 
  • #20
apeiron
Gold Member
Cop out from what? The hard problem of consciousness doesn't boil down to a lack of knowledge of neurobiology. The opinions you are attacking aren't the same as the QM-mysticism, new-age soul-stuff out there.
Well, in practice the hard problem is invoked in consciousness studies as justification for leaping to radical mechanisms because it is "obvious" that neurological detail cannot cut it. Chalmers, for example, floated the idea that awareness might be another fundamental property of matter like charge or mass. Panpsychism. And his arguments were used by QM mysterians like Hameroff to legitimate that whole school of crackpottery. So in the social history of this area, the hard problem argument has in fact been incredibly damaging in my view.

The hard problem exists because it is plain that simple reductionist modelling of the brain cannot give you a satisfactory theory of consciousness. There is indeed a fundamental problem because if you try to build from the neural circuits up, you end up having to say "and at the end of it all, consciousness emerges". There is no proper causal account. You just have this global property that is the epiphenomenal smoke above the factory.

So what's the correct next step? You shift from simple models of causality to complex ones. You learn about the systems view - the kind of thing biologists know all about because they have already been through the loop when it comes to "life" - that mysterious emergent property which once seemed a hard problem requiring some soulstuff, some vitalistic essence, to explain it fully.

Once you understand the nature of global constraints and downward causality, the hard problem evaporates IMHO. There is no such thing as an epiphenomenal global property in this view - a global state that can exist, yet serve no function. You logically now cannot have such a detached thing.

Of course there is still the objective description vs the subjective impression distinction. But that applies to all modelling because that is what modelling is about - objectifying our impressions.
 
  • #21
What is a representation? It is a presentation (an image, a sound, or otherwise) in the mind. So when there is a representation there is necessarily a consciousness.

So while I will agree that "the self" may be a representation and that it has changed over time to become more elaborate and complex, I do not think putting a representation at the basis of consciousness will help explain how consciousness can come from nonconscious matter that doesn't have the ability to represent anything. The bold bit in Ramachandran's text is the part that needs explaining.
From my cognitive neuroscience textbook, my friend:

"Cognitive and neural systems are sometimes said to create representations of the world. Representations need not only concern physical properties of the world (e.g. sounds, colors) but may also relate to more abstract forms of knowledge (e.g. knowledge of the beliefs of other people, factual knowledge).

Cognitive psychologists may refer to a mental representation of, say, your grandmother, being accessed in an information-processing model of face processing. However, it is important to distinguish this from its neural representation. There is unlikely to be a one-to-one relationship between a hypothetical mental representation and the response properties of single neurons. The outside world is not copied inside the head, neither literally nor metaphorically; rather, the response properties of neurons (and brain regions) correlate with certain real-world features. As such, the relationship between a mental representation and a neural one is unlikely to be straightforward. The electrophysiological method of single-cell recordings ..." - page 33 of The Student's Guide to Cognitive Neuroscience 2nd edition by Jamie Ward

Lots of research has been conducted into the nature of representations. It is taken as axiomatic that representations exist. Proponents of embodied cognition (http://en.wikipedia.org/wiki/Embodied_cognition) would disagree, but I really, really dislike their views. Here is some more about the nature of representations from my textbook:

Rolls and Deco (2002) distinguish between three different types of representation that may be found at the neural level:

1. Local representation. All the information about a stimulus/event is carried in one of the neurons (as in a grandmother cell).
2. Fully distributed representation. All the information about a stimulus/event is carried in all neurons of a given population.
3. Sparse distributed representation. A distributed representation in which a small proportion of the neurons carry information about a stimulus/event. - page 35 of The Student's Guide to Cognitive Neuroscience 2nd edition by Jamie Ward
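
To make those three coding schemes concrete, here is a small sketch of my own (purely illustrative - the unit counts and activity values are invented, not taken from Rolls and Deco or from Ward's textbook) contrasting them as activity patterns over a population of ten units:

```python
# Illustrative sketch only: three ways a population of 10 units could carry
# information about one stimulus. Numbers are made up for contrast.
import numpy as np

n_units = 10
rng = np.random.default_rng(1)

# 1. Local representation: a single "grandmother cell" carries everything.
local = np.zeros(n_units)
local[3] = 1.0

# 2. Fully distributed representation: every unit participates.
fully_distributed = rng.uniform(0.2, 1.0, n_units)

# 3. Sparse distributed representation: only a small subset of units participates.
sparse = np.zeros(n_units)
sparse[rng.choice(n_units, size=2, replace=False)] = rng.uniform(0.5, 1.0, 2)

for name, code in [("local", local),
                   ("fully distributed", fully_distributed),
                   ("sparse distributed", sparse)]:
    active = np.count_nonzero(code)
    print(f"{name:>18}: {active}/{n_units} units active, pattern = {np.round(code, 2)}")
```

The point of the contrast is simply that the same stimulus can be carried by one unit, by every unit, or by a small subset - which is part of why questions about "where" a representation lives are less straightforward than they sound.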

What VS Ramachandran is claiming is that the manipulation of higher-order representations underlies our consciousness. He refers to this as metarepresentation. Here is some more information regarding his position:

V.S. Ramachandran hypothesizes that a new set of brain structures evolved in the course of hominization in order to transform the outputs of the primary sensory areas into what he calls “metarepresentations”. In other words, instead of producing simple sensory representations, the brain began to create “representations of representations” that ultimately made symbolic thought possible, and this enhanced form made sensory information easier to manipulate, in particular for purposes of language.

One of the brain structures involved in creating these metarepresentations would be the inferior parietal lobe, which is one of the youngest regions in the brain, in terms of evolution. In humans, this lobe is divided into the angular gyrus and the supramarginal gyrus, and both of these structures are fairly large. Just beside them is Wernicke’s area, which is unique to human beings and is associated with understanding of language.

According to Ramachandran, the interaction among Wernicke’s area, the inferior parietal lobe (especially in the right hemisphere), and the anterior cingulate cortex is fundamental for generating metarepresentations from sensory representations, thus giving rise to qualia and to the sense of a “self” that experiences these qualia.
http://thebrain.mcgill.ca/flash/a/a_12/a_12_cr/a_12_cr_con/a_12_cr_con.html
 
  • #22
Lots of research has been conducted into the nature of representations. It is taken as axiomatic that representations exist. Proponents of embodied cognition (http://en.wikipedia.org/wiki/Embodied_cognition) would disagree, but I really, really dislike their views. Here is some more about the nature of representations from my textbook:
I don't deny that representations exist, but I do not think they exist outside of consciousness. Physically, neurons (or other physical objects) just consist of atoms (and the elementary particles that those consist of), and those in themselves do not represent anything, unless it is in the context of a mind utilising them, as with those three types of representations you quote. The neurons or single cells are located within the brain, and so they are spoken of as if they supply information to it and thus to the mind.

What VS Ramachandran is claiming is that the manipulation of higher-order representations underlies our consciousness. He refers to this as metarepresentation. Here is some more information regarding his position:

V.S. Ramachandran hypothesizes that a new set of brain structures evolved in the course of hominization in order to transform the outputs of the primary sensory areas into what he calls “metarepresentations”. In other words, instead of producing simple sensory representations, the brain began to create “representations of representations” that ultimately made symbolic thought possible, and this enhanced form made sensory information easier to manipulate, in particular for purposes of language.

One of the brain structures involved in creating these metarepresentations would be the inferior parietal lobe, which is one of the youngest regions in the brain, in terms of evolution. In humans, this lobe is divided into the angular gyrus and the supramarginal gyrus, and both of these structures are fairly large. Just beside them is Wernicke’s area, which is unique to human beings and is associated with understanding of language.

According to Ramachandran, the interaction among Wernicke’s area, the inferior parietal lobe (especially in the right hemisphere), and the anterior cingulate cortex is fundamental for generating metarepresentations from sensory representations, thus giving rise to qualia and to the sense of a “self” that experiences these qualia.
http://thebrain.mcgill.ca/flash/a/a_12/a_12_cr/a_12_cr_con/a_12_cr_con.html
It looks like he's talking about the evolution of representations (consciousness), how they get more complex, when there is already an initial representation to work with. I'd like to know where that initial representation came from.
 
  • #23
Lievo
I can supply a bookshelf of references.
Sure, but will it be relevant to this thread? I don't think so, but will open a new thread. :wink:

It seems as though some are speaking about a specifically human sense of self, the one we are acquainted with and experience, and are speaking about the necessary conditions for the existence of our sense of self, not necessarily any sense of self.
Well said. So the question: is the hard problem hard because of what is making our sense of self specific to humans, or because of the part shared with many animals?

An animal just "is" conscious.
This is directly and specifically relevant to the thread. The problem with any approach that puts too much emphasis on language is that everything that makes consciousness a hard problem is already present when we're looking at this 'just' conscious phenomenon. That's why I don't see how Vygotsky can be relevant to this question. Not to be rude in any way, nor to say it's not interesting for some purposes. I'm just saying language is not the right place to start when one is interested in the hard question of subjectivity.

the hard problem argument has in fact been incredibly damaging in my view. (...) Once you understand the nature of global constraints and downward causality, the hard problem evaporates IMHO.
Well, if you dismiss the hard problem I can understand why you didn't get my point. I don't think you should dismiss the existence of the hard problem based on some misguided attempts to solve it, any more than Newton's attempts at alchemy should discredit his ideas about gravitation.

Of course there is still the objective description vs the subjective impression distinction. But that applies to all modelling because that is what modelling is about - objectifying our impressions.
Deeper than that, IMHO. For example, how can we know whether someone who appears to be in a coma is or is not conscious? In retrospect we know some such patients have been misclassified, because a way was eventually found to communicate with them. But will we one day be able to say that someone is NOT conscious 'just' by looking at his brain activity?
 
  • #24
I don't deny that representations exist, but I do not think they exist outside of consciousness. Physically, neurons (or other physical objects) just consist of atoms (and the elementary particles that those consist of), and those in themselves do not represent anything, unless it is in the context of a mind utilising them, as with those three types of representations you quote. The neurons or single cells are located within the brain, and so they are spoken of as if they supply information to it and thus to the mind.
Yeah, I adopt the stance of identity theory:

"the theory simply states that when we experience something - e.g. pain - this is exactly reflected by a corresponding neurological state in the brain (such as the interaction of certain neurons, axons, etc.). From this point of view, your mind is your brain - they are identical." http://www.philosophyonline.co.uk/pom/pom_identity_what.htm

A neural representation is a mental representation.

It looks like he's talking about the evolution of representations (consciousness), how they get more complex...
No, he is claiming that it is with the manipulation of metarepresentations ('representations of representations') that we engage in consciousness. Many other species have representational capacities, but not to the extent of humans.
 
  • #25
apeiron
Gold Member
Sure, but will it be relevant to this thread? I don't think so, but will open a new thread. :wink:

Well said. So the question: is the hard problem hard because of what is making our sense of self specific to humans, or because of the part shared with many animals?

This is directly and specifically relevant to the thread. The problem with any approach that puts too much emphasis on language is that everything that makes consciousness a hard problem is already present when we're looking at this 'just' conscious phenomenon. That's why I don't see how Vygotsky can be relevant to this question. Not to be rude in any way, nor to say it's not interesting for some purposes. I'm just saying language is not the right place to start when one is interested in the hard question of subjectivity.

Well, if you dismiss the hard problem I can understand why you didn't get my point. I don't think you should dismiss the existence of the hard problem based on some misguided attempts to solve it, any more than Newton's attempts at alchemy should discredit his ideas about gravitation.

Deeper than that, IMHO. For example, how can we know whether someone who appears to be in a coma is or is not conscious? In retrospect we know some such patients have been misclassified, because a way was eventually found to communicate with them. But will we one day be able to say that someone is NOT conscious 'just' by looking at his brain activity?
Generally, here you make a bunch of expressions of doubt, which require zero intellectual effort. I, on the other hand, have made positive and specific claims in regard to the OP (which may have been rambling, but is entitled "What or where is our real sense of self?").

So what I have said is: start by being clear about what "consciousness" really is. The hard part is in fact just the structure of awareness. The human, socially evolved aspects are almost trivially simple. But you have to strip them away to get down to the real questions.

Now on to your claim that my attempts to solve or bypass the hard problem by an appeal to systems causality are misguided. Care to explain why? Or should I just take your word on the issue :zzz:. Sources would be appreciated.
 
