What or where is our real sense of self?

In summary, there is no general consensus among experts in philosophy or cognitive neuroscience about the concept of the self. Questions about consciousness and the self remain open, and any cohesive theory must answer several essential questions, including the role of neural activity in consciousness, the nature of subjectivity, and how the self emerges. There are also issues that all theories of consciousness must address, such as binding, qualia, and the Cartesian Theatre. The theory of self-representation is appealing to some, as it posits that higher-order representations play a significant role in the origin of subjectivity. This is supported by the idea that metarepresentations, created by a second set of brain structures, allow humans to be consciously aware of objects in a way that other animals are not.
  • #1
Metarepresent
No one in philosophy or cognitive neuroscience has come to a general consensus about the “self”. Questions about the “self” and “consciousness” have been pestering me for a while. I believe I need more knowledge before I can truly adopt a position; hence the purpose of this message is the acquisition of knowledge. I am currently studying a cognitive neuroscience textbook with a friend (he and I are both undergrad Biology majors), and we have started a blog documenting our progress. Granted, I realize this is a question that needs aid from all fields, especially philosophy.

First, let’s discuss the essential questions that must be answered in order to formulate a cohesive theory of self:

Does consciousness emerge from neural activity alone? Why is there always someone having the experience? Who is the feeler of your feelings and the dreamer of your dreams? Who is the agent doing the doing, and what is the entity thinking your thoughts? Why is your conscious reality your conscious reality? Why is consciousness subjective? Why does our perceived reality almost invariably have a center: an experiencing self? How exactly, then, does subjectivity, this “I”, emerge? Is the self an operation rather than a thing or repository? How to comprehend subjectivity is the deepest puzzle in consciousness research. The most important question of all is how neurons encode meaning and evoke all the semantic associations of an object.

Also, before continuing this long diatribe: what are the best phenomenal characteristics we may attribute to consciousness? I believe unity, a recursive processing style, and an egocentric perspective are the best phenomenal target properties to attribute to the ‘self’. Furthermore, there are still issues all theories of consciousness must address. These include, but are not limited to, binding (i.e., how the property of coherence arises in consciousness given that processing in the brain is distributed across domains; Wolf Singer claims that the synchronization of oscillatory activity may be the mechanism for the binding of distributed brain processes), qualia, the Cartesian Theatre (i.e., how we can create a theory of consciousness without falling into the covert-dualism trap Dennett warns against), and so on.
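To make Singer's synchronization idea a bit more concrete, here is a toy simulation I sketched up: a bare-bones Kuramoto model, in which coupled oscillators stand in for distributed neural populations and pull one another into phase. This is only a cartoon of the idea, not anything from Singer's own work, and all the parameters are invented:

Code:
import numpy as np

rng = np.random.default_rng(0)
n = 50
freqs = rng.normal(10.0, 1.0, n)         # intrinsic frequencies in Hz (spread is made up)
phases = rng.uniform(0.0, 2 * np.pi, n)  # random initial phases
coupling = 40.0                          # chosen well above the critical value for this spread
dt = 0.001                               # Euler integration step, in seconds

def coherence(ph):
    # Kuramoto order parameter r: 0 = incoherent, 1 = fully phase-locked
    return abs(np.exp(1j * ph).mean())

print(f"coherence before: {coherence(phases):.2f}")
for _ in range(5000):                    # simulate 5 seconds
    mean_field = np.exp(1j * phases).mean()
    r, psi = abs(mean_field), np.angle(mean_field)
    # each oscillator is nudged toward the population's mean phase
    phases += dt * (2 * np.pi * freqs + coupling * r * np.sin(psi - phases))
print(f"coherence after:  {coherence(phases):.2f}")

Run it and the coherence climbs from roughly 0.1 to above 0.9: that is the sense in which synchronized oscillation could "bind" activity that is spread across many separate units.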

For some reason, the self-representational theories of subjectivity appeal to me. I believe higher-order representations have a huge role in the origin of subjectivity (as proposed by Ramachandran and others). I am VERY interested in what you have to say regarding this:

"Very early in evolution the brain developed the ability to create first-order sensory representation of external objects that could elicit only a very limited number of reactions. For example a rat's brain has only a first-order representation of a cat - specifically, as a furry, moving thing to avoid reflexively. But as the human brain evolved further, there emerged a second brain - a set of nerve connections, to be exact - that was in a sense parasitic on the old one. This second brain creates metarepresentations (representations of representations – a higher order of abstraction) by processing the information from the first brain into manageable chunks that can be used for a wider repertoire of more sophisticated responses, including language and symbolic thought. This is why, instead of just “the furry enemy” that it for the rat, the cat appears to you as a mammal, a predator, a pet, an enemy of dogs and rats, a thing that has ears, whiskers, a long tail, and a meow; it even reminds you of Halle Berry in a latex suit. It also has a name, “cat,” symbolizing the whole cloud of associations. In sort, the second brain imbues an object with meaning, creating a metarepresentation that allows you to be consciously aware of a cat in a way that the rat isn’t.

Metarepresentations are also a prerequisite for our values, beliefs, and priorities. For example, a first-order representation of disgust is a visceral “avoid it” reaction, while a metarepresentation would include, among other things, the social disgust you feel toward something you consider morally wrong or ethically inappropriate. Such higher-order representations can be juggled around in your mind in a manner that is unique to humans. They are linked to our sense of self and enable us to find meaning in the outside world – both material and social – and allow us to define ourselves in relation to it. For example, I can say, “I find her attitude toward emptying the cat litter box disgusting.”
- The Tell-Tale Brain by V.S. Ramachandran, page 247

It is with the manipulation of meta-representations, according to Ramachandran, that we engage in human consciousness as we know it. Thomas Metzinger makes a similar claim: “Our evolved type of conscious self-model is unique to the human brain, in that by representing the process of representation itself, we can catch ourselves – as Antonio Damasio would call it – in the act of knowing”. I have also read many other plausible self-representational theories of consciousness. For example, I find this one very convincing, due to how it places the notion of a recursive consciousness into an evolutionary paradigm:

http://www.wwwconsciousness.com/Consciousness_PDF.pdf [Broken]
 
Last edited by a moderator:
  • #2
An entirely different school of thought (and the correct one :smile:) is that self-awareness and all the other human "higher mental abilities" are the result of language and cultural evolution. Words, as a new level of symbolic order and constraint, scaffold more elaborate forms of thinking.

Lev Vygotsky (the sociocultural school) and to a lesser extent GH Mead (symbolic interactionism) are the best sources here. The idea regularly gets rediscovered, as by philosopher Andy Clark.

So the meta-thinking is due to social evolution (call it memetic if you like) rather than biological (genetic) evolution. This is why neuroscience does not find "higher faculties" in the brain's architecture, even though it has really, really tried to.
 
  • #3
If a single human somehow developed without any stimulus from the environment, common sense suggests it would have no thought processes, and no self.

What neuroscience can do is try to find the means by which the brain is able to engage in social activity and take advantage of it; how it stores the information, changes it, and contributes to the information ensemble that is society.

For instance, spatial metaphors are associated with spatial processing in the parietal lobes, as ways of "perceiving" things that aren't directly detectable by our senses. In the most abstract sense, we use visualization in science to transform measurable variables into perceivable spatial objects (plots or graphs). But we also use a lot of spatial metaphors to describe emotional situations and even the self ("within", "inside") as compared to the "outside" world.

Linguistic relativity definitely plays a role in this; societies (even nonhuman societies) have physically evolved to the point where they can organize a body of knowledge (stored in language) that augments the simpler protein-signaling language of cells. I speculate that human language is possibly the most sophisticated among animals.
 
  • #4
Hi Metarepresent. Welcome to the board.
Metarepresent said:
No one in philosophy or cognitive neuroscience has come to a general consensus about the “self”.
You said a mouthful right there. You’ll get different opinions from everyone, and more often than not, people will talk right past each other, not knowing or not understanding what the other person is talking about, so getting a solid grounding in some of the concepts before we refer to them is important.

When we talk about consciousness, there are various phenomena that we might be referring to. One person might be talking about the objective aspects of consciousness (psychological consciousness), but others might be thinking about the subjective aspects of consciousness, also known as phenomenal consciousness. The two are very different, though some folks can’t or won’t differentiate between them.

In his book "The Conscious Mind", Chalmers talks about these two different concepts of mind, the phenomenal and the psychological. The phenomenal aspect of mind "is characterized by the way it feels ... ". In other words, the phenomenal qualities of mind include those phenomena that form our experiences, or how experience feels. The psychological aspect of mind, in comparison, "is characterized by what it does." The psychological aspects of mind are those things that are objectively measurable, such as behavior or the interactions of neurons.

Consider phenomenal consciousness to be a set of phenomena: the experience of the color red, how sugar tastes, what pain feels like, what hot or cold feels like, and what love and anger feel like are all phenomenal qualities of mind. So when we refer to phenomenal consciousness, we're referring to that aspect of mind that is characterized by our subjective experiences.

In comparison, we can talk about psychological consciousness, which again is a set of phenomena. Psychological consciousness includes that which is objectively measurable, such as behavior. We might think phenomenal consciousness and psychological consciousness are one and the same, but they pick out separate and distinct phenomena. One is subjective, the other is objective.

Chalmers states, "It seems reasonable to say that together, the psychological and the phenomenal exhaust the mental. That is, every mental property is either a phenomenal property, a psychological property, or some combination of the two. Certainly, if we are concerned with those manifest properties of the mind that cry out for explanation, we find first, the varieties of conscious experience and second, the causation of behavior. There is no third kind of manifest explanandum, and the first two sources of evidence - experience and behavior - provide no reason to believe in any third kind of nonphenomenal, nonfunctional properties ..."

So if you’re referring to how it is we experience anything, you are referring to phenomenal consciousness. If you refer to how neurons interact or how our behavior is affected by something, you are referring to psychological consciousness. The difference between the two, and why phenomenal consciousness should exist at all, is often what is referred to as the explanatory gap. We can explain why neurons interact in a given way, but why they should give rise to phenomenal consciousness isn’t explained by explaining how neurons interact.

Metarepresent said:
Does consciousness emerge from neural activity alone?
The standard paradigm of mind is that consciousness is emergent from the interaction of neurons. That isn’t to say that everyone agrees this is true, but that is what is generally taken as true. It gives rise to a few very serious problems and paradoxes though, so some folks have looked to see where those logical inconsistencies have arisen and how one might reconcile them. So far, I don’t see any consensus on how to reconcile these very serious issues, so logically it would seem to me that the standard theory of consciousness doesn’t provide an adequate explanation.

Metarepresent said:
Why is there always someone having the experience? Who is the feeler of your feelings and the dreamer of your dreams? Who is the agent doing the doing, and what is the entity thinking your thoughts?
This line of reasoning requires a religious type of duality, as if there were some kind of “soul” that is separate and distinct from the brain, the neurons, and the bits and pieces that make up a person. But there's no need to posit this soul. Consider only that there are various phenomena created by the brain, some of which are subjective and can't be objectively measured. That's enough to explain the sense of self. The sense of self is a phenomenon created by the brain, a feeling that associates the phenomenal experience with the body that is having the experience. Note that the term “duality” in philosophy can have another, very different meaning, which has nothing to do with religion. That’s a separate issue entirely.

Metarepresent said:
The most important question of all is how neurons encode meaning and evoke all the semantic associations of an object.
I mentioned earlier that there are some very serious logical inconsistencies in the present theory of how phenomenal consciousness emerges from the interaction of neurons. This is one of them. Stevan Harnad calls this the “symbol grounding problem”. Basically, computational accounts of mind, on which consciousness emerges from the interaction of neurons, treat the brain as a “symbol manipulation system”, and as such, the symbols are not grounded in anything intrinsic to nature.

Regarding the reference to Ramachandran, one can ask whether or not his discussion of self-representation is a discussion about the psychological aspect of consciousness (ie: the objectively measurable one) or the phenomenal aspect. We can talk about how neurons interact, how additional neurons such as mirror neurons interact, and the resulting behavior of an animal or person, but it gets us no closer to closing the explanatory gap. Why should there be phenomenal experiences associated with any of those neuron interactions? Without explaining why these additional phenomena should arise, we haven't explained phenomenal consciousness.
 
  • #5
Consider the idea of the philosophical zombie, which you have probably heard of. It seems to me that any objective theory of consciousness cannot distinguish between such zombies and human beings with subjective experience. I agree with Q_Goest: there is an important distinction between the objective phenomenon of consciousness and the subjective quality of it, and it seems that the latter cannot be touched upon in any objective theory. One can imagine the existence of such zombies, and in my opinion you cannot infer that some particular individual is not such a being.
 
  • #6
Q_Goest said:
In comparison, we can talk about psychological consciousness, which again is a set of phenomena. Psychological consciousness includes that which is objectively measurable, such as behavior. We might think phenomenal consciousness and psychological consciousness are one and the same, but they pick out separate and distinct phenomena. One is subjective, the other is objective.

Yes, and so one is also folklore, the other is science. I vote for the science.

And what science can tell us, for example, is that introspection is learned and socially constructed. There is no biological faculty of "introspective awareness". It is a skill you have to learn. And what most people end up "seeing" is what cultural expectations lead them to see.

A classic example is dreams. Very few people learn to accurately introspect on the phenomenal nature of dreams.

The hard problem of consciousness really boils down to the fact that people find it too hard to study all the neurology, psychology, anthropology and systems science involved in being up to date with what is known.

It is so much easier to be a Chalmers and say all that junk is irrelevant, he has no need to learn it, because there will always be a phenomenological mystery. Lots of people love that line because it justifies a belief in soulstuff or QM coherence magic. It is the easy cop-out.

There is also a philosophy of science issue being trampled over here.

The point of scientific models is to provide objective descriptions based on "fundamentals" or universals. If you are modelling atoms or life, you are not trying to provide a subjective impression (ie: particular and local as opposed to general and global) of "what it is like" to be an atom, or a living creature.

So a clear divide between objective description and subjective experience is what we are aiming for rather than "a hard problem". We are trying to get as far away from a subjective stance as possible, so as to be maximally objective (see Nozick's Invariances for example).

Having established an "objective theory of mind" we should then of course be able to return to the subjective in some fashion.

It would be nice to be able to build "conscious" machines based on the principles discovered (for instance, Grossberg's ART neural nets or Friston's Bayesian systems). It would be nice to have a feeling of being able to understand why the many aspects of consciousness are what they are, and not otherwise (as for example why you can have blackish green, red and blue - forest green, scarlet, navy - but not blackish yellow).

But it takes a level of study that most people are not willing to undertake to appreciate exactly how much we do already know, and how far we might still have to go.
 
  • #7
apeiron said:
Yes, and so one is also folklore, the other is science. I vote for the science.

...

The hard problem of consciousness really boils down to the fact that people find it too hard to study all the neurology, psychology, anthropology and systems science involved in being up to date with what is known.

It is so much easier to be a Chalmers and say all that junk is irrelevant, he has no need to learn it, because there will always be a phenomenological mystery. Lots of people love that line because it justifies a belief in soulstuff or QM coherence magic. It is the easy cop-out.

There is also a philosophy of science issue being trampled over here.

...

But it takes a level of study that most people are not willing to undertake to appreciate exactly how much we do already know, and how far we might still have to go.
Ok, so… psychological consciousness is science and phenomenal consciousness is folklore? And you vote for science. And people like Chalmers say a lot of junk because he’s copping out? And the science issue is being trampled and the level of study that you have is magnificent and other folks just aren’t up to your standards. Is that about right? That’s quite the attitude you have.

Do you know what browbeat means?
brow•beat –verb (used with object), -beat, -beat•en, -beat•ing.
to intimidate by overbearing looks or words; bully: They browbeat him into agreeing
I’m sure you’re much smarter than me or anyone else here, but how about you cut the sh*t and stop acting like a bully on the school playground.

apeiron said:
But there seems less point jumping up and down in a philosophy sub-forum about people wanting to do too much philosophy for your tastes. We get that you can get by in your daily work without raising wider questions.
https://www.physicsforums.com/showthread.php?t=451342&+forum&page=4

apeiron said:
To me this is demonstrating the inflexibility of thought I am talking about. Me clever scientist, you dopey philosopher. Ugh. End of story.
https://www.physicsforums.com/showthread.php?t=302413&+forum&page=2

Ok, so wait… if you are the clever scientist and I’m the dopey philosopher, then why did you say that ZapperZ was the clever scientist and you were the dopey philosopher? And why is this philosophy forum too much philosophy for you when you were arguing that you wanted to do philosophy in the philosophy forum? I’m SOOOOOOO confused.

I’m sure you have a rational explanation for all this and you won’t browbeat people like you always do. So, why haven’t you joined in the discussion about improving the philosophy forum in the science advisors forum? I haven’t seen anything from you there. It’s like you don’t seem to care about the philosophy forum or where it’s going or the new rules or anything. You really should discuss this with us. It’s up at the top of the main page under “science advisors forum”.
 
  • #8
apeiron said:
It is the easy cop-out.

Cop out from what? The hard problem of consciousness doesn't boil down to a lack of knowledge of neurobiology. The opinions you are attacking aren't the same as the QM-mysticism, new-age soul-stuff out there. They are not objecting to, or "trampling on", the science behind neurology and how it relates to consciousness, and that science is not being degraded for what it is within its own domain. One is simply emphasizing the important distinction between it and what we call subjective experience, which again and again seems to be interpreted as a counter-argument against, or cop-out from, scientific endeavor. One is emphasizing that distinction, and drawing the line between the subjective nature of consciousness and what can actually be concluded from objective, measurable results.
 
Last edited:
  • #9
Q_Goest said:
Ok, so wait… if you are the clever scientist and I’m the dopey philosopher, then why did you say that ZapperZ was the clever scientist and you were the dopey philosopher? And why is this philosophy forum too much philosophy for you when you were arguing that you wanted to do philosophy in the philosophy forum? I’m SOOOOOOO confused.

But my position was explained quite clearly in that thread. Did you not read it? Here it is again...

To me this is demonstrating the inflexibility of thought I am talking about. Me clever scientist, you dopey philosopher. Ugh. End of story.

You are basing your whole point of view on the assumption that scientists and philosophers have ways of modelling that are fundamentally different. Yet I'm not hearing any interesting evidence as to the claimed nature of the difference.

I am putting forward the alternative view that modelling is modelling, and there are different levels of abstraction - it is a hierarchy of modelling. Down the bottom, you have science as technology, up the top, you have science as philosophy. And neither is better. Both have their uses.

And I've frequently made these points to explain my general position on "philosophy"...

1) Classical philosophy is instructive: it shows the origin of our metaphysical prejudices - why we believe what we believe. And those guys did a really good job first time round.

2) Modern academic philosophy is mostly shallow and a waste of time. If you want the best "philosophical" thinking, then I believe you find it within science - in particular, theoretical biology when it comes to questions of ethics, complexity, life, mind and meaning.

3) Unfortunately, the level of philosophical thought within mind science - neuro and psych - is not as sophisticated as in biology. Largely this is because neuro arises out of medical training and psych out of computational models. However, give the field another 20 years and who knows?

4) I have a personal interest in the history of systems approaches within philosophy, so that means Anaximander, Aristotle, Hegel, Peirce, etc.

5) I've had personal dealings with Chalmers and others, so that informs my opinions.

Q_Goest said:
I’m sure you have a rational explanation for all this and you won’t browbeat people like you always do. So, why haven’t you joined in the discussion about improving the philosophy forum in the science advisors forum? I haven’t seen anything from you there. It’s like you don’t seem to care about the philosophy forum or where it’s going or the new rules or anything. You really should discuss this with us. It’s up at the top of the main page under “science advisors forum”.

I really can't see any link to such a forum. And I certainly was not invited to take part :smile:.

As to where the philosophy forum should go, it might be nice if it discussed physics a bit more. I wouldn't mind if topics like free will and consciousness were banned, because there just isn't the depth of basic scientific knowledge available to constrain the conversations.
 
  • #10
The left thinks you're right, the right thinks you're left. Nobody likes a moderate.
 
  • #11
  • #12
Apeiron, the tone of your last post was much improved. I’ve already made the point that some folks can’t or won’t differentiate between psychological consciousness and phenomenal consciousness. I won’t hijack this thread to push my views, and I’d ask that you attempt to do the same.
 
  • #13
apeiron said:
An entirely different school of thought (and the correct one :smile:) is that self-awareness and all the other human "higher mental abilities" are the result of language and cultural evolution.
One thing I'd love you to discuss is why you're taking it for granted that self-awareness is a higher mental ability.

One needs to think that if one is to believe that your school of thought is the most likely to add interesting pieces of evidence regarding awareness. Maybe the fact that no one has yet managed to develop robotic awareness is because our present ideas are not enough to explain awareness. Especially since, for your position, the problem may well be that awareness is simply not explained by language. My two cents :wink:
 
  • #14
apeiron said:
An entirely different school of thought (and the correct one :smile:) is that self-awareness and all the other human "higher mental abilities" are the result of language and cultural evolution. Words, as a new level of symbolic order and constraint, scaffold more elaborate forms of thinking.


I would agree that it’s useless to try to understand “consciousness” outside the context of language and human communication. The “sense of self” humans have is something we develop when we’re very young, by talking to ourselves. I don’t see any reason to think that someone who grew up having no contact with other humans would develop anything like the “sense of self” that we all take for granted.

Of course we’ll never know what such a person’s internal experience is like, if they can’t tell us about it. And they’ll never know anything about it themselves, if they can’t ask themselves about it.

Likewise it seems obvious to me that the “meta-representation” discussed by Ramachandran is built on language... although he writes as though language were just one of the many “more sophisticated responses” that our “second brain” somehow supports.

No doubt everything we do and experience is supported by our neural architecture. Ramachandran says, “as the human brain evolved further...” – but the thing is, once interpersonal communication begins to evolve, there’s a completely new, non-genetic channel for passing things on from one generation to the next... and a corresponding selective “pressure” to improve the communications channel and make it more robust. So our brains must have evolved through biological evolution to support this far more rapid kind of evolution through personal connection.

But whatever “higher” capacities our human brain has evolved, they can only function to the extent each of us learns to talk. We have a “conscious self” to the extent we have a communicative relationship with ourselves, more or less like the relationships we have with others.
 
  • #15
First, to Lievo: if you wish to better understand why apeiron (or anybody, for that matter) "takes for granted" the idea that self-awareness is a higher mental function, you could start by reading this paper:

http://www.marxists.org/archive/vygotsky/works/1925/consciousness.htm

I'm sure you will find that it elucidates various methodological problems with different approaches to psychology and the problem of awareness, and the beginnings of the socially mediated approach to language and self-awareness.

Also, to ConradDJ: just figured I would throw it out there, out of fairness to Ramachandran, that later in the book he says something quite similar to your "...evolution through personal connection". He speaks, speculatively, about portions of the brain responsible for tool use and social responses in apes undergoing a random mutation, such that what originally evolved for tool use and primitive social response served as an exaptation: it freed us from more biologically determined responses that change only over long timescales, and enabled us to transmit learned practices vertically across generations and horizontally, within a single generation, on much shorter timescales.
That seems to me to be close to what you said.
 
  • #16
ConradDJ said:
I don’t see any reason to think that someone who grew up having no contact with other humans would develop anything like the “sense of self” that we all take for granted.
Conrad, was that meant to answer my question? So you think that no animal has a sense of self, right?

JDStupi said:
I'm sure you will find that it elucidates various methodological problems with different approaches to psychology and the problem of awareness, and the beginnings of the socially mediated approach to language and self-awareness.
JDStupi, again, this describes Vygotsky's position; it doesn't answer my question. Specifically, in the link you pointed to, he takes for granted that:

all animal behaviour consists of two groups of reactions: innate or unconditional reflexes and acquired or conditional reactions. (...) what is fundamentally new in human behaviour (...) Whereas animals passively adapt to the environment, man actively adapts the environment to himself.

I agree this was 'obvious' in his time - a time when surgeons thought that babies could not feel pain. I just don't see how one can still believe that - nor about babies, for that matter. Don't you now think it's quite obvious that many animals behave on their own, have intents and feelings, and thus must have a sense of self?
 
  • #17
Metarepresent said:
For some reason, the self-representational theories of subjectivity appeal to me. I believe higher-order representations have a huge role in the origin of subjectivity (as proposed by Ramachandran and others). I am VERY interested in what you have to say regarding this:

"Very early in evolution the brain developed the ability to create first-order sensory representation of external objects that could elicit only a very limited number of reactions. For example a rat's brain has only a first-order representation of a cat - specifically, as a furry, moving thing to avoid reflexively.[...]​
What is a representation? It is a presentation (image, sound, or otherwise) in the mind. So when there is a representation, there is necessarily a consciousness.

So while I will agree that "the self" may be a representation, and that it has changed over time to become more elaborate and complex, I do not think putting a representation at the basis of consciousness will help explain how consciousness can come from nonconscious matter that doesn't have the ability to represent anything. The bold bit in Ramachandran's text is the part that needs explaining.
 
  • #18
Lievo said:
One thing I'd love you to discuss is why you're taking it for granted that self-awareness is a higher mental ability.

As per my original post, the reasons for holding this position are based on well established, if poorly publicised, research. Vygotsky is the single best source IMHO. But for anyone interested, I can supply a bookshelf of references.

As to an animal's sense of self, there is of course also a different level of sense of self: a basic sense of embodiment and awareness of body boundaries. Even an animal "knows" its tongue from its food, and so which bit to chew. The difference that language makes is the ability to think about such facts. An animal just "is" conscious. Humans can step back, scaffolded by words, to think about this fact, which then becomes surprising.
 
  • #19
Lievo said:
Don't you now think it's quite obvious that many animals behave on their own, have intents and feelings, and thus must have a sense of self?

This is the part of your post, I believe, that most represents the source of disagreement between you and others. The problem, in my mind, is that quite possibly an argument is developing over different topics. It seems as though some are speaking about a specifically human sense of self, the one we are acquainted with and experience, and about the necessary conditions for the existence of our sense of self, not necessarily any sense of self.
The problem is that we are prone to get into semantic disagreements over what constitutes a "self" at this level of knowledge, because we are arguing over the borders of the concept of a "self". On one hand, when we introspect on our sense of self, we find that our "I" is built upon a vast network of memories, associative connections, generalized concepts, a re-flective (to bend back upon) ability, and embedding within a specific socio-cultural milieu. On the other hand, we see that animals possess "subjective" states such as pain, pleasure, etc., which leads us to believe that animals too have an "I". The problem then is: in what sense are the animal "I" and the human "I" the same in meaning or reference?
Another question that arises, and that points out the different conceptions of "self", is: "Is awareness a sufficient condition for a sense of self?" And if not, what is?

So while I will agree that "the self" may be a representation, and that it has changed over time to become more elaborate and complex, I do not think putting a representation at the basis of consciousness will help explain how consciousness can come from nonconscious matter that doesn't have the ability to represent anything. The bold bit in Ramachandran's text is the part that needs explaining.

We must also be sure not to put words into Rama's mouth. I do not believe that Rama has any intent of saying that "all consciousness arises from representation", for this would simply be explaining opium in terms of its soporific properties. He, I believe, is specifically concerned with the existence of the "higher" forms of human consciousness. Of course, postulating that no animal has any sense of self whatsoever, and that humans then made a discrete jump to a fully developed sense of self, would render a "proper" explanation of awareness and consciousness impossible, because then it could only be explained ahistorically and not within a biological evolutionary context. That said, tracing the genealogical development of "awareness", "self", and their interrelation through biological history is not a question for human psychology, and in order to answer such a question we must seek the roots much further down. In fact, the very existence of cellular communication in some sense has a "representational" character, in that some specific process is taken to signal for some other regulation of the "behaviour" or homeostasis of the system. Anticipating something apeiron will most likely suggest, and he knows much more about this than I do, this line of thought is similar to that of "biosemiotics", which is essentially what these boundary-straddling discussions of "self" and "consciousness" need.
The unanswered question is: "At what point do semantic operations arise out of the purely physical?" This question is closely related to the question of the origin of the encoding of information in the DNA molecule (or some closely related antecedent molecule). It is difficult to think about the origins of such processes because we are prone to say "How did 'it' 'know' to encode information?", which is a form of the homunculus fallacy, and circumventing this explanatory gap with a naturalistic explanation will be difficult.
 
  • #20
Jarle said:
Cop out from what? The hard problem of consciousness doesn't boil down to a lack of knowledge of neurobiology. The opinions you are attacking aren't the same as the QM-mysticism, new-age soul-stuff out there.

Well, in practice the hard problem is invoked in consciousness studies as justification for leaping to radical mechanisms because it is "obvious" that neurological detail cannot cut it. Chalmers, for example, floated the idea that awareness might be another fundamental property of matter like charge or mass. Panpsychism. And his arguments were used by QM mysterians like Hameroff to legitimate that whole school of crackpottery. So in the social history of this area, the hard problem argument has in fact been incredibly damaging in my view.

The hard problem exists because it is plain that simple reductionist modelling of the brain cannot give you a satisfactory theory of consciousness. There is indeed a fundamental problem because if you try to build from the neural circuits up, you end up having to say "and at the end of it all, consciousness emerges". There is no proper causal account. You just have this global property that is the epiphenomenal smoke above the factory.

So what's the correct next step? You shift from simple models of causality to complex ones. You learn about the systems view - the kind of thing biologists know all about because they have already been through the loop when it comes to "life" - that mysterious emergent property which once seemed a hard problem requiring some soulstuff, some vitalistic essence, to explain it fully.

Once you understand the nature of global constraints and downward causality, the hard problem evaporates IMHO. There is no such thing as an epiphenomenal global property in this view - a global state that can exist, yet serve no function. You logically now cannot have such a detached thing.

Of course there is still the objective description vs the subjective impression distinction. But that applies to all modelling because that is what modelling is about - objectifying our impressions.
 
  • #21
pftest said:
What is a representation? It is a presentation (image, sound, or otherwise) in the mind. So when there is a representation, there is necessarily a consciousness.

So while I will agree that "the self" may be a representation, and that it has changed over time to become more elaborate and complex, I do not think putting a representation at the basis of consciousness will help explain how consciousness can come from nonconscious matter that doesn't have the ability to represent anything. The bold bit in Ramachandran's text is the part that needs explaining.

From my cognitive neuroscience textbook, my friend:

"Cognitive and neural systems are sometimes said to create representations of the world. Representations need not only concern physical properties of the world (e.g. sounds, colors) but may also relate to more abstract forms of knowledge (e.g. knowledge of the beliefs of other people, factual knowledge).

Cognitive psychologists may refer to a mental representation of, say, your grandmother, being accessed in an information-processing model of face processing. However, it is important to distinguish this from its neural representation. There is unlikely to be a one-to-one relationship between a hypothetical mental representation and the response properties of single neurons. The outside world is not copied inside the head, neither literally nor metaphorically; rather, the response properties of neurons (and brain regions) correlate with certain real-world features. As such, the relationship between a mental representation and a neural one is unlikely to be straightforward. ..." - page 33 of The Student's Guide to Cognitive Neuroscience, 2nd edition, by Jamie Ward

Lots of research has been conducted into the nature of representations. It is taken as axiomatic that representations exist. Proponents of embodied cognition (http://en.wikipedia.org/wiki/Embodied_cognition) would disagree, but I really, really dislike their views. Here is some more about the nature of representations from my textbook:

Rolls and Deco (2002) distinguish between three different types of representation that may be found at the neural level:

1. Local representation. All the information about a stimulus/event is carried in one of the neurons (as in a grandmother cell).
2. Fully distributed representation. All the information about a stimulus/event is carried in all neurons of a given population.
3. Sparse distributed representation. A distributed representation in which a small proportion of the neurons carry information about a stimulus/event. - page 35 of The Student's Guide to Cognitive Neuroscience, 2nd edition, by Jamie Ward
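To see concretely what these three schemes look like, here is a toy sketch I put together; the neuron count and activity values are invented, and only the pattern of which neurons are active matters:

Code:
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 10

# Toy activity patterns for a single stimulus (say, your grandmother)
# under the three coding schemes Rolls and Deco distinguish.

# 1. Local: one "grandmother cell" carries all the information.
local = np.zeros(n_neurons)
local[3] = 1.0

# 2. Fully distributed: every neuron in the population participates.
fully_distributed = rng.uniform(0.1, 1.0, n_neurons)

# 3. Sparse distributed: a small subset of neurons is active.
sparse = np.zeros(n_neurons)
sparse[[1, 4, 7]] = rng.uniform(0.5, 1.0, 3)

for name, code in [("local", local),
                   ("fully distributed", fully_distributed),
                   ("sparse distributed", sparse)]:
    print(f"{name:>18}: {np.count_nonzero(code)}/{n_neurons} neurons active")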

What V.S. Ramachandran is claiming is that the manipulation of higher-order representations underlies our consciousness. He refers to this as metarepresentation. Here is some more information regarding his position:

V.S. Ramachandran hypothesizes that a new set of brain structures evolved in the course of hominization in order to transform the outputs of the primary sensory areas into what he calls “metarepresentations”. In other words, instead of producing simple sensory representations, the brain began to create “representations of representations” that ultimately made symbolic thought possible, and this enhanced form made sensory information easier to manipulate, in particular for purposes of language.

One of the brain structures involved in creating these metarepresentations would be the inferior parietal lobe, which is one of the youngest regions in the brain, in terms of evolution. In humans, this lobe is divided into the angular gyrus and the supramarginal gyrus, and both of these structures are fairly large. Just beside them is Wernicke’s area, which is unique to human beings and is associated with understanding of language.

According to Ramachandran, the interaction among Wernicke’s area, the inferior parietal lobe (especially in the right hemisphere), and the anterior cingulate cortex is fundamental for generating metarepresentations from sensory representations, thus giving rise to qualia and to the sense of a “self” that experiences these qualia.
http://thebrain.mcgill.ca/flash/a/a_12/a_12_cr/a_12_cr_con/a_12_cr_con.html
 
  • #22
Metarepresent said:
Lots of research has been conducted into the nature of representations. It is taken as axiomatic that representations exist. Proponents of embodied cognition (http://en.wikipedia.org/wiki/Embodied_cognition) would disagree, but I really, really dislike their views. Here is some more about the nature of representations from my textbook:
I don't deny that representations exist, but I do not think they exist outside of consciousness. Physically, neurons (or other physical objects) just consist of atoms (and the elementary particles that those consist of), and those in themselves do not represent anything, unless it is in the context of a mind utilising them, as with those three types of representations you quote. The neurons or single cells are located within the brain, and so they are spoken of as if they supply information to it, and thus to the mind.

What V.S. Ramachandran is claiming is that the manipulation of higher-order representations underlies our consciousness. He refers to this as metarepresentation. Here is some more information regarding his position:

V.S. Ramachandran hypothesizes that a new set of brain structures evolved in the course of hominization in order to transform the outputs of the primary sensory areas into what he calls “metarepresentations”. In other words, instead of producing simple sensory representations, the brain began to create “representations of representations” that ultimately made symbolic thought possible, and this enhanced form made sensory information easier to manipulate, in particular for purposes of language.

One of the brain structures involved in creating these metarepresentations would be the inferior parietal lobe, which is one of the youngest regions in the brain, in terms of evolution. In humans, this lobe is divided into the angular gyrus and the supramarginal gyrus, and both of these structures are fairly large. Just beside them is Wernicke’s area, which is unique to human beings and is associated with understanding of language.

According to Ramachandran, the interaction among Wernicke’s area, the inferior parietal lobe (especially in the right hemisphere), and the anterior cingulate cortex is fundamental for generating metarepresentations from sensory representations, thus giving rise to qualia and to the sense of a “self” that experiences these qualia.
http://thebrain.mcgill.ca/flash/a/a_12/a_12_cr/a_12_cr_con/a_12_cr_con.html
It looks like he's talking about the evolution of representations (consciousness), how they get more complex, when there is already an initial representation to work with. I'd like to know where that initial representation came from.
 
  • #23
apeiron said:
I can supply a bookshelf of references.
Sure, but will it be relevant to this thread? I don't think so, but I will open a new thread. :wink:

JDStupi said:
It seems as though some are speaking about a specifically human sense of self, the one we are acquainted with and experience, and about the necessary conditions for the existence of our sense of self, not necessarily any sense of self.
Well said. So the question is: is the hard problem hard because of what makes our sense of self specific to humans, or because of the part we share with many animals?

apeiron said:
An animal just "is" conscious.
This is directly and specifically relevant to the thread. The problem with any approach putting too much emphasis on language is that everything that makes consciousness a hard problem is already present when we look at this 'just' conscious phenomenon. That's why I don't see how Vygotsky can be relevant to this question. Not to be rude in any way, nor to say it's not interesting for some purposes. I'm just saying language is not the right place to start when one is interested in the hard question of subjectivity.

apeiron said:
the hard problem argument has in fact been incredibly damaging in my view. (...) Once you understand the nature of global constraints and downward causality, the hard problem evaporates IMHO.
Well, if you dismiss the hard problem, I can understand why you didn't get my point. I don't think you should dismiss the existence of the hard problem based on some misguided attempts to solve it, any more than Newton's attempts at alchemy should discredit his ideas about gravitation.

apeiron said:
Of course there is still the objective description vs the subjective impression distinction. But that applies to all modelling because that is what modelling is about - objectifying our impressions.
Deeper than that, IMHO. For example, how can we know whether someone in a coma is or is not conscious? In retrospect we know some have been misclassified, because we found a way to communicate with them. But will we one day be able to say that someone is NOT conscious 'just' by looking at his brain activity?
 
  • #24
pftest said:
I don't deny that representations exist, but I do not think they exist outside of consciousness. Physically, neurons (or other physical objects) just consist of atoms (and the elementary particles that those consist of), and those in themselves do not represent anything, unless it is in the context of a mind utilising them, as with those three types of representations you quote. The neurons or single cells are located within the brain, and so they are spoken of as if they supply information to it, and thus to the mind.

Yeah, I adopt the stance of identity theory:

"... the theory simply states that when we experience something - e.g. pain - this is exactly reflected by a corresponding neurological state in the brain (such as the interaction of certain neurons, axons, etc.). From this point of view, your mind is your brain - they are identical." http://www.philosophyonline.co.uk/pom/pom_identity_what.htm

A neural representation is a mental representation.

It looks like he's talking about the evolution of representations (consciousness), how they get more complex...

No, he is claiming that it is with the manipulation of metarepresentations ('representations of representations') that we engage in consciousness. Many other species have representational capacities, but not to the extent of humans.
 
Last edited:
  • #25
Lievo said:
Sure, but will it be relevant to this thread? I don't think so, but I will open a new thread. :wink:

Well said. So the question is: is the hard problem hard because of what makes our sense of self specific to humans, or because of the part we share with many animals?

This is directly and specifically relevant to the thread. The problem with any approach putting too much emphasis on language is that everything that makes consciousness a hard problem is already present when we look at this 'just' conscious phenomenon. That's why I don't see how Vygotsky can be relevant to this question. Not to be rude in any way, nor to say it's not interesting for some purposes. I'm just saying language is not the right place to start when one is interested in the hard question of subjectivity.

Well, if you dismiss the hard problem, I can understand why you didn't get my point. I don't think you should dismiss the existence of the hard problem based on some misguided attempts to solve it, any more than Newton's attempts at alchemy should discredit his ideas about gravitation.

Deeper than that, IMHO. For example, how can we know whether someone in a coma is or is not conscious? In retrospect we know some have been misclassified, because we found a way to communicate with them. But will we one day be able to say that someone is NOT conscious 'just' by looking at his brain activity?

Generally here you make a bunch of expressions of doubt, which require zero intellectual effort. I, on the other hand, have made positive and specific claims in regard to the OP (which may have been rambling, but is entitled "What or where is our real sense of self?").

So what I have said is: start by being clear about what "consciousness" really is. The hard part is in fact just the structure of awareness. The human, socially evolved aspects are almost trivially simple. But you have to strip them away to get down to the real questions.

Now on to your claim that my attempts to solve or bypass the hard problem by an appeal to systems causality are misguided. Care to explain why? Or should I just take your word on the issue :zzz:. Sources would be appreciated.
 
  • #26
apeiron said:
Now on to your claim that my attempts to solve or bypass the hard problem by an appeal to systems causality are misguided. Care to explain why?
Sorry, I was unclear. Maybe this explains your tone? I was trying to express the idea that your point (that the hard problem argument has in fact been incredibly damaging) should not push you to evacuate the very existence of a hard problem.

apeiron said:
So what I have said is: start by being clear about what "consciousness" really is.
Yep. My point is simply that the problem is, IMHO, not where you think it is. If we were able to construct a robotic spirit as good as, say, that of a chimp, then I doubt the human version would remain a problem. I may be wrong, and Vygotsky will then be needed, but the fact is that we can't reach this point now. So obviously there is something we do not understand, and obviously this can't be answered by an approach that puts all the emphasis on human language. If you still think there is no problem except for simulating the human spirit specifically, then let's agree to disagree.
 
Last edited:
  • #27
Lievo said:
Sorry, I was unclear. Maybe this explains your tone? I was trying to express the idea that your point (that the hard problem argument has in fact been incredibly damaging) should not push you to evacuate the very existence of a hard problem.

I don't think you have explained why a systems approach does not in fact evacuate(!) the very existence of the hard problem. I await your elucidation.
 
  • #28
So the question is: is the hard problem hard because of what makes our sense of self specific to humans, or because of the part we share with many animals?

I would surmise that it would be the part shared by both. The hard problem being based around the divide between "what it feels like" and what its "objective" manifestations are, the animal most likely does have qualia, and as such there is a hard problem for it as there is for us.

[...] The problem with any approach putting too much emphasis on language is that everything that makes consciousness a hard problem is already present when we look at this 'just' conscious phenomenon [...] I'm just saying language is not the right place to start when one is interested in the hard question of subjectivity.

What if you were to expand the idea of starting with "language" to the idea of starting with symbols? What if you were to expand the notion of starting with "symbols" to the idea of generalization? What is the relationship between generalization and symbolism?

For example, how can we know whether someone in a coma is or is not conscious? In retrospect we know some have been misclassified, because we found a way to communicate with them. But will we one day be able to say that someone is NOT conscious 'just' by looking at his brain activity?

I don't see why this is that far out of the reach of science at all. I am not entirely familiar with brain-imaging techniques, but I would imagine that they will get more sophisticated, not only technologically but also in our ability to interpret them. I don't see this as the "hard problem", though; it isn't tied up with the completely internal subjective description. This is a form of inferring from knowledge about relations whether or not somebody is "conscious", and it doesn't seem much more complex than saying that somebody won't be able to do x, y, or z based on knowledge about brain damage.

Here, I do not claim to have more knowledge of what constitutes "awareness", and the relationships between "awareness" and "subjectivity", but it does seem that we must trace it biologically much further down than humans in order to properly understand it.


Lots of research has been conducted into the nature of representations. It is taken as axiomatic that representations exist. Proponents of embodied cognition (http://en.wikipedia.org/wiki/Embodied_cognition) would disagree, but I really, really dislike their views

Here, my question to you is: why do you "really really dislike" the views of embodied cognition theorists? I do not wish to challenge your position, because admittedly I do not have a large knowledge of embodied cognition and, as such, my conceptions of it may be erroneous. That said, from what I have read they seem to have extremely interesting views that have the possibility of providing a fresh perspective in cognitive research. While you may not subscribe completely to the views of the school, it may be silly to throw out all its insights due to certain disliked ones. The idea that we are rooted in our bodies and environment, are continually in interaction with both, and that our cognitive abilities arise through interactions with our body and environment does not seem like something that should be that controversial. Looking at it biologically, it actually seems quite plausible. I do not know their views on representations, though, and it seems that whatever their view is on that is what you dislike.
It seems to me that embodied cognition has grown away from the "computational" approach to mind, which equates the mind to a computer and looks at it in terms of syntactical operations. To me, it seems as though adaptivity is a necessary condition for intelligence. Adaptivity is a notion that is not understandable except through interactions with an environment, which are "mediated" by the body (and I put "mediated" in quotes because the wording could come out seeming as though there is some "us" mediated by our body, which would lead us into some complex substance dualism). The idea that we have a computational database of "input-output" activity, stored in some "memory" form through environment interactions, simply doesn't seem as efficient as some general adaptive means of learning based around patterns and interaction with the environment. However, if one were to say "there are no representations", I would disagree, because of what I see as the necessity of representation given the way in which we know our brain to operate.
In terms of your list of representations, it would seem to me that considering embodied cognition and its proposals about situated thinking would lead to a better understanding of the temporal representation of sequences, and of the patterns of neuronal networks interacting with the environment/body and each other through time.

As a side note, what I think will be (and possibly already is) a growing and important field of study for these questions is a general mathematical theory of learning. It certainly seems that mathematically modelling a learning process as simple as Pavlovian conditioning, and then extending it to the interaction of similar learning processes and building networks of such processes, would be greatly beneficial to the understanding of learning.
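For instance, the classic mathematical model of Pavlovian conditioning is the Rescorla-Wagner rule, where associative strength is updated in proportion to how surprising each trial is. A minimal sketch in Python (the learning-rate and asymptote values are just illustrative, and I've folded the usual two salience parameters into a single alpha):

```python
# Minimal Rescorla-Wagner sketch: associative strength V is updated by
# the discrepancy between the obtained outcome (lam) and the current
# prediction (V). alpha is the learning rate.
def rescorla_wagner(n_trials, alpha=0.3, lam=1.0):
    V = 0.0
    history = []
    for _ in range(n_trials):
        V += alpha * (lam - V)  # surprise-driven update
        history.append(V)
    return history

# V climbs toward the asymptote lam, fastest when the surprise is largest
print(rescorla_wagner(10))
```

Even this toy version shows the point: learning here is not a stored input-output table but an error-correcting process, and networks of such processes are exactly what one would then study.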
 
  • #29
apeiron said:
why a systems approach does not in fact evacuate(!) the very existence of the hard problem.
Because you can describe a philosophical zombie using a systems approach. Because a systems approach would not give us a criterion to prove someone unconscious. Because there is no need for subjectivity to explain function. All the same answer.

I guess you do not need any reference, but let's put one anyway:
http://consc.net/online/1.2d
 
  • #30
Now apeiron, I'm just asking if I have this right: you are saying that a complex systems perspective on the nature of awareness, human or otherwise, dissolves the "hard problem" because due to its multi-layered, multi-directional causal explanations the nature of the divide between neural machinery and experience is largely closed. Despite this, though, some would claim that the "hard problem" still exists because, in principle, what it is like to "feel" something first hand is still unexplained. To this you would agree, but say that this "hard problem" will remain unsolved for the sole reason that experience comes first and all theories and attempts at "objectification" are the result of second-order mappings of experience. Subjective experience, by definition, can only be felt by one, and so the hard problem in that sense will not be solved; but this is accepted axiomatically and we proceed to model further. So in this sense the hard problem is solved, because the systems perspective provides a non-ad-hoc view about the emergence of such sophisticated aspects of reality, but by its starting point as modelling it is unable to claim to "reduce" first-person experience, and this is not the aim.
It is largely an epistemological acceptance that "the hard problem" is not necessarily a "problem" at all, but simply a fact of nature.
If I have some of this wrong, or you disagree/agree/have comments, please feel free to share. I'm simply trying to see everyone's positions, and am interested in a dynamical perspective on things, such as you seem to promote.

__________________________________________________________________________
*figured I would edit rather than add a whole new post*

To Lievo

Because there is no need for subjectivity to explain function.

As a side note, you may want to read what Vygotsky has to say about the relation between subjectivity and function in the essay I posted before. He speaks about subjectivity as emerging from a "proprioceptive" field of reflexological reactions ("internal reflex arcs"), which, by virtue of being regulations of the internal environment/state of the system itself, can help explain why subjective processes are accessible to the individual alone. Certainly, it would seem to me that such broad behavioural, reflexological views on subjectivity and awareness are not human-centric and can be applied elsewhere.
 
Last edited:
  • #31
JDStupi said:
I would surmise that it would be the part shared by both. The hard problem being based around the divide between "what it feels like" and what its "objective" manifestations are, the animal most likely does have qualia and as such there is a hard problem for it as there is for us.
That's exactly my point. :approve:

JDStupi said:
What if you were to expand the idea of starting with "language" to the idea of starting with symbols? What if you were to expand the notion of starting with "symbols" to the idea of generalization? What is the relationship between generalization and symbolism?
What a great question! Yes, I do believe that the ability to generalise is a good place to start. I'm not sure it will solve the problem, however, but that's what I'm trying.

JDStupi said:
As a side note, you may want to read what Vygotsky has to say about the relation between subjectivity and function in the essay I posted before. He speaks about subjectivity as emerging from a "proprioceptive" field of reflexological reactions ("internal reflex arcs"), which, by virtue of being regulations of the internal environment/state of the system itself, can help explain why subjective processes are accessible to the individual alone. Certainly, it would seem to me that such broad behavioural, reflexological views on subjectivity and awareness are not human-centric and can be applied elsewhere.
Well, another obvious problem is that a spinal cord transection suppresses proprioception but has never prevented anyone from being conscious. I'll take a look anyway. ;)
 
Last edited:
  • #32
Lievo said:
Deeper than that, IMHO. For example, how can we know whether someone in a coma is or is not conscious? In retrospect we know some patients have been misclassified, because we eventually found a way to communicate with them. But will we one day be able to say that someone is NOT conscious 'just' by looking at his brain activity?

This is of course a topic with a large literature, as organ harvesting depends on an acceptable criterion for brain death.

The answer of the 1968 Harvard Brain Death Committee was that a loss of the brain's global integrative capacity was the key measure. Not something easy to measure in practice, then, but clearly a systems-based approach to what the right answer would be.

There are dissenting views. UCLA epilepsy specialist Alan Shewmon has argued that death has not truly occurred until life processes have ceased even at the cellular or mitochondrial level.

And while the Catholic Church seems happy with "the irretrievable loss of the brain's integrative capacity" as the point where the immortal soul departs the material body :smile:, other faiths, such as Japanese Shintoism and Judaism, consider the body to be sacred and still alive while there is any sign of on-going physiological activity.

So in an arena of life where the answer in fact matters, the systems approach (rather than QM or panpsychism or whatever) was the best scientific answer.
 
  • #33
JDStupi said:
Now apeiron, I'm just asking if I have this right: You are saying that a complex systems perspective on the nature of awareness, human or otherwise, dissolves the "hard problem" because due to its multi-layered, multi-directional causal explanations the nature of the divide between neural machinery and experience is largely closed.

Correct. Though it would be a claim still to be argued out in full detail of course.

I would also not frame it as local neural machinery~global experiencing, as this reintroduces the very dualism that I want to avoid. Clearly these are words for two quite different things, so how can the rift have been healed?

Instead, based on systems arguments such as Stan Salthe's scalar hierarchy theory, I would talk about things that are the same essentially, but which become different simply because they act from opposing extremes of scale.

So for example, there are Stephen Grossberg's ART neural nets, where the global scale (long-term memory) acts downwards to constrain the actions of the local scale (short-term memory). The causality flows both ways, as the actions of short-term memory of course act upwards, in a constructive, additive fashion, to adjust the state of long-term memory. The system learns in this fashion.
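Purely to make that two-way traffic concrete, here is a toy sketch in the spirit of ART-1 in Python (not Grossberg's actual differential equations; the binary templates and the vigilance value are the usual textbook simplification). The stored templates play the role of long-term memory constraining recognition top-down, while each input acts bottom-up, either resonating with an existing template or recruiting a new one:

```python
import numpy as np

def art1_step(x, templates, vigilance=0.6):
    """One ART-1-style presentation: binary input x against stored templates."""
    x = np.asarray(x, dtype=bool)
    # Bottom-up: rank existing categories by overlap with the input
    order = sorted(range(len(templates)),
                   key=lambda j: -int((templates[j] & x).sum()))
    for j in order:
        overlap = templates[j] & x
        # Top-down vigilance test: does the template account for enough of x?
        if overlap.sum() >= vigilance * max(int(x.sum()), 1):
            templates[j] = overlap  # resonance: template refined by the input
            return j
    templates.append(x.copy())      # mismatch everywhere: recruit a new category
    return len(templates) - 1

templates = []
for pattern in [[1, 1, 0, 0], [1, 1, 1, 0], [0, 0, 1, 1], [0, 1, 1, 1]]:
    print(art1_step(pattern, templates))  # clusters into two categories: 0 0 1 1
```

Raise the vigilance and the system recruits more categories; lower it and the top-down templates absorb more of the inputs. That is the context~event asymmetry in miniature.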

And quite explicitly, the two levels of action are not dualistically different but fundamentally the same. Both are "memory". Everything else arises because memory is acting across different spatiotemporal scales, forcing one to be the prevailing context and the other the momentary perturbation of a localised event.

It is the same with Friston's Bayesian brain networks. The system generates global predictive states that constrain from the top down. Then it learns from localised errors of prediction that feed back up to adjust the system's future predictions.
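The same loop can also be shown in miniature for the Bayesian brain story. A toy sketch (this is just the generic prediction-error update with a fixed learning rate, not Friston's full free-energy machinery):

```python
import random

# Toy prediction-error loop: the "global" prediction mu constrains what
# counts as an error; only the local error flows back up to revise mu.
def predictive_update(mu, observations, lr=0.1):
    for y in observations:
        error = y - mu        # bottom-up: localised prediction error
        mu += lr * error      # top-down state revised by the error
    return mu

samples = [random.gauss(3.0, 0.5) for _ in range(200)]
print(predictive_update(0.0, samples))  # settles near the true mean, 3.0
```

Nothing here is dualistic: the prediction and the error are the same kind of quantity, differing only in whether they act as the standing context or the momentary perturbation.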

I like to use language even more descriptive of "experiencing" that still respects the background systems modelling. So I often talk about ideas (instead of long-term memories) that act downwards to constrain, and impressions (instead of short-term memories) that act upwards to construct the ideas, these being the long-run, predictive constraints.

So there is indeed a closed loop that leaves nothing out causally (so it is claimed, and the argument against it would have to show what in fact is being left out).

JDStupi said:
Despite this, though, some would claim that the "hard problem" still exists because, in principle, what it is like to "feel" something first hand is still unexplained. To this you would agree, but say that this "hard problem" will remain unsolved for the sole reason that experience comes first and all theories and attempts at "objectification" are the result of second-order mappings of experience.

Correct again. At the end of the day, we may indeed feel that the detail of our visual system explains why we experience blackish yellow as brown. So we can end up with a heck of a lot explained. But not why brown is brownish, or red, reddish.

So what is the nature of this final residue of difficulty? Really I think it comes down to the fact that models fail when they can no longer make distinct measurements.

Now for forest green, scarlet and navy, you have three colours that can be subjectively seen as mixtures. And this can be contrasted with colours that seem like pure hues. So put together the neuroscience story on the visual pathways (surround effects that blacken primary hues) and the subjectively observable difference, and the phenomenology seems legitimately explained away.

However for pure hues, it is no longer possible to subjectively experience an alternative. You can't imagine that instead of red, you might see bibble, or rudge, or whatever. There is no subjective measurement of "other" that can be made, even if perhaps there is a very full objective neuroscience account of everything your visual paths happen to be doing.

With brown, you can in fact surprise yourself that it is actually yellow you are looking at if you have the right lab lighting conditions. If you darken the surround in the right way, a chocolate bar becomes yellowish. And there is also an exact explanation of why in the way the visual system manufactures yellow as an opponent hue to blue at a later stage of the processing chain.
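That yellow-to-brown point is easy to check computationally: "brown" is not a separate hue, just a low-luminance yellow-orange judged against a brighter surround. A quick Python sketch (the specific RGB triples are illustrative only):

```python
import colorsys

yellow = (1.0, 0.8, 0.0)                 # a bright yellow-orange
brown = tuple(0.35 * c for c in yellow)  # same channel ratios, much darker

for name, rgb in [("yellow", yellow), ("brown", brown)]:
    h, l, s = colorsys.rgb_to_hls(*rgb)
    print(f"{name}: rgb={tuple(round(c, 2) for c in rgb)} hue={h:.3f}")
# Both lines report the same hue angle; only lightness differs. Shown on
# a screen against a bright surround, the darker triple reads as brown.
```

So the objective side of the brown/yellow relation is fully specifiable. What remains is exactly the residue discussed below: why the dark triple feels brownish at all.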

However, brown still looks brown subjectively. And why it looks like that, rather than rudge or bibble, is again back in the territory of no longer being able to subjectively observe a difference. A neuroscientist could give you all the model you want, but you can't make a measurement that would show: tweak this, tweak that, and your subjective state changes accordingly.
 
  • #34
Lievo said:
Because you can describe a philosophical zombie using a system approach.

Can you cite a reference to support this claim? Or would you prefer to make the argument yourself?

In the meantime, here is yet another voice speaking the systems message. Just a recent paper from someone reinventing the wheel on the subject, but it shows what is going on out there in research land.

Abstract
Mental causation is a philosophical concept attempting to describe the causal effect of the immaterial mind on subject's behavior. Various types of causality have different interpretations in the literature. I propose and explain this concept within the framework of the reciprocal causality operating in the brain bidirectionally between local and global brain levels.

While committing myself to the physical closure assumption, I leave room for the suggested role of mental properties. Mental level is viewed as an irreducible perspective of description supervening on the global brain level. Hence, mental causation is argued to be interpreted as a convenient metaphor because mental properties are asserted to be causally redundant.

Nevertheless, they will eventually help us identify and understand the neural and computational correlates of consciousness. Within cognitive science, the proposed view is consistent with the connectionist and dynamic systems paradigms, and within the philosophy of mind, I see it as a form of non-reductive physicalism.

http://www.bicsconference.org/BICS2010online-preprints/ModelsOfConsciousness/36.pdf
 
  • #35
Hi Metarepresent and welcome to the forums!

You can do a search in the forums; this kind of question (https://www.physicsforums.com/search.php?searchid=2539069) was discussed here many times.

Basically, to have a sense of self you must have a single unified mental state, so for example a person suffering from multiple personality disorder or a split-brain patient does not experience the same sense of self as other people. They have two or multiple mental states. But again, they are conscious just as we are. No matter which self is present, it is just as conscious as the others. So this sense of self seems to be elusive, epiphenomenal.
From Ramachandran's Phantoms in the Brain (http://kostas.homeip.net/reading/Science/Ramachandran/PhantomsInTheBrain.htm):

"So I hooked up the student volunteers to a GSR device while they stared at the table. I then stroked the hidden hand and the table surface simultaneously for several seconds until the student started experiencing the table as his own hand. Next I bashed the table surface with a hammer as the student watched. Instantly, there was a huge change in GSR, as if I had smashed the student's own fingers."


Nowadays, the two materialistic theories of consciousness (mind-brain identity and functionalism) claim that consciousness is either in the neurons or in the functioning of the brain. On the first view, only creatures with brains like the human brain count as conscious, while on the second, any "creature" functioning in a certain way could be conscious. In mind-brain identity theory, mental states are reduced to physical ones; in functionalism, the mental is something which emerges as a property of the physical but can't be reduced to it. You can read what follows from these two views below - a great analysis by Jaegwon Kim.
From the Internet Encyclopedia of Philosophy entry on multiple realizability (http://www.iep.utm.edu/mult-rea/):

"They could (a) deny the causal status of mental types; that is, they could reject Mental Realism and deny that mental types are genuine properties. Alternatively, they could (b) reject Physicalism; that is, they could endorse the causal status of mental types, but deny their causal status derives from the causal status of their physical realizers. Or finally, they could (c) endorse Mental Realism and Physicalism, and reject Antireductionism."
 
Last edited by a moderator:

1. What is the concept of "self" in science?

The concept of "self" in science refers to the individual's unique identity, including their thoughts, feelings, and behaviors. It is also known as the sense of self or self-awareness, and is believed to be shaped by both genetic and environmental factors.

2. Is our sense of self determined by nature or nurture?

There is no simple answer to this question as both nature and nurture play a role in shaping our sense of self. While genetics may influence certain traits and characteristics, our environment and experiences also play a significant role in shaping our sense of self.

3. Can our sense of self change over time?

Yes, our sense of self is not fixed and can change over time. This can be due to various factors such as life experiences, relationships, and personal growth. Our sense of self is constantly evolving and can be influenced by both internal and external factors.

4. How does culture impact our sense of self?

Culture plays a significant role in shaping our sense of self. Our cultural background, beliefs, and values can influence how we perceive ourselves and our place in the world. It can also impact our behaviors and attitudes towards ourselves and others.

5. Is our sense of self the same as our physical body?

No, our sense of self is not the same as our physical body. While our physical body is a part of our identity, our sense of self also includes our thoughts, emotions, and experiences. It is a complex and dynamic concept that goes beyond just our physical appearance.
