What or where is our real sense of self?

  • Thread starter Metarepresent
  • Start date
  • Tags
    Self
In summary, there is no general consensus among experts in philosophy or cognitive neuroscience about the concept of the self. Questions about consciousness and the self remain open, and further knowledge is needed to form a cohesive theory. Essential questions include the role of neural activity in consciousness, the nature of subjectivity, and how the self emerges. There are also issues that all theories of consciousness must address, such as binding, qualia, and the Cartesian Theatre. The theory of self-representation is appealing to some, as it posits that higher-order representations play a significant role in the origin of subjectivity. This is supported by the idea that metarepresentations allow humans to consciously reflect on their own representations.
  • #36
apeiron said:
Can you cite a reference to support this claim.
https://www.physicsforums.com/showpost.php?p=3135015&postcount=29

apeiron said:
This is of course a topic with a large literature as organ harvesting depends on an acceptable criteria for brain death (...) the systems approach (rather than QM or panpsychism or whatever) was the best scientific answer.
This is the first I've heard that current practices are based on a philosophical systems approach! I'd have said they are based on common sense, and that no one gives 'the systems approach' a **** when it comes to medical practice. Always happy to learn something.
 
Last edited by a moderator:
  • #37
Metarepresent said:
Also, before continuing this long diatribe: what are the best phenomenal characteristics we may attribute to consciousness? I believe unity, recursive processing style and egocentric perspective are the best phenomenal target properties attributed to the ‘self’.
Hi Meta. I must have missed this one. What Ramachandran and others mean by the "phenomenal characteristics" of consciousness is in fact what I was referring to previously. Phenomenal consciousness concerns what something feels like, the experience of it, the subjective experience that we have. The sense of unity is a phenomenal characteristic in that it is a subjective phenomenon, a feeling or experience. I don't know exactly what you mean by "recursive processing style", but I presume it refers to how neurons interact and/or the various higher-order synaptic connections within the brain. If that's the case, then it wouldn't be a phenomenal characteristic of consciousness but a psychological one (for lack of a better description), since it would be objectively definable and knowable. How neurons interact and give rise to conscious experience or metarepresentations is not a phenomenal characteristic of the brain or mind. The experience itself and the metarepresentations are phenomenal characteristics; how neurons interact to give rise to them is not. Those are purely neurological characteristics that can be objectively defined. Clearly, Ramachandran takes this view. See for example this YouTube video:


Metarepresent said:
Yeah, I adopt the stance of identity theory:

"the theory simply states that when we experience something - e.g. pain - this is exactly reflected by a corresponding neurological state in the brain (such as the interaction of certain neurons, axons, etc.). From this point of view, your mind is your brain - they are identical." http://www.philosophyonline.co.uk/pom/pom_identity_what.htm

A neural representation is a mental representation.

No, he is claiming that it is through the manipulation of metarepresentations ('representations of representations') that we engage in consciousness. Many other species have representational capacities, but not to the extent humans do.

That's correct. Ramachandran takes a very basic, uncontroversial computationalist approach to consciousness. I think you first need to clarify the difference between phenomenal and neurological characteristics, though. Ramachandran is pointing out that certain circuits in the brain create phenomenal consciousness and that it is these circuits that are 'in control' in some fashion. Further, he is suggesting, as I'm sure you are aware, that there is what's commonly referred to as a "one-to-one" relationship between the neuronal activity and the phenomenal experience that is responsible for this mental representation. Ramachandran takes the fairly uncontroversial stance that the overall phenomenal experience (and the many parts of that experience) is what equates to the mental representation.

All this isn't to say that our present approach to consciousness is without difficulties, as Ramachandran points out in the video. Some very serious issues that pop out of this approach include "mental causation", the problems with downward causation (i.e., strong downward causation), and the knowledge paradox. So once we understand the basic concept of mental representation as presented, for example, by Ramachandran, we then need to move on to what issues this particular model of the human mind gives rise to and look for ways to resolve those issues.
 
  • #38
JDStupi said:
It seems as though some are speaking about a specifically human sense of self, the one we are acquainted with and experience, and about the necessary conditions for the existence of our sense of self, not necessarily of any sense of self.

... On one hand, when we introspect or look at our sense of self, we find that our "I" is built upon a vast network of memories, associative connections, generalized concepts, a re-flective (to bend back upon) ability, and embedding within a specific socio-cultural milieu. On the other hand, we see that animals possess subjective states such as pain, pleasure, etc., which leads us to believe that animals too have an "I". The problem then is: in what sense are the animal "I" and the human "I" the same in meaning or reference?

Lievo said:
So the question is: is the hard problem hard because of what makes our sense of self specific to humans, or because of the part shared with many animals?

... The problem with any approach that puts too much emphasis on language is that everything that makes consciousness a hard problem is already present when we look at these 'just' conscious phenomena...


My comments were certainly aimed at what’s specifically human in “consciousness” and “sense of self.” I want to explain why that seems to me the fundamental issue and how it is bound up with language. I’m sorry for the length of this post... I’d be briefer if I thought I could be clear in less space.

First, I would say the “hard problem” is hard only because of our confusion between what’s specific to humans and what we share with other animals. When we treat “consciousness” as synonymous with “awareness” and talk about animals “possessing subjective states such as pain, pleasure, etc” or “having an ‘I’” we’re doing something that’s very basic to human consciousness – i.e. projecting our own kind of awareness on others, imagining that they see and feel and think like we do.

This is basic to our humanity because it’s the basis of human communication – experiencing other people as “you”, as having their own internal worlds from which they relate to us as we relate to them. This is what’s sometimes called “mindreading” in psychology. And if you go back to a primitive enough level, it’s something we share with other mammals, who are certainly able to sense and respond to what another animal is feeling.

This is one of a great many very sophisticated, highly evolved functions of the mammalian brain that let animals focus their attention according to all kinds of very subtle cues in their environment. These abilities are quite amazing in themselves, but if you talk to biologists or ethologists I doubt you'll find many who believe there is a "hard problem" here. It's clear why these capacities evolved, it's clear how evolution works, and there's no mystery any more about the relationship of evolving self-reproducing organisms to the physical world.

Now what’s special about humans is not our neural hardware but the software we run on it, namely human language. It’s not at all similar to computer software (and our neural hardware is equally different from computer hardware) – but even so the term is a good one, because it distinguishes between what’s “built in” to our brains genetically and what’s “installed” from the outside, through our interpersonal relationships, as we grow up.

Part of the problem we have in understanding the situation with “consciousness” is that by now this software has evolved to become extremely sophisticated – it gives us all kinds of tools for directing our attention and responding to our environment that go far beyond what other animals do. And we were already very well versed in using much of this attention-focusing (“representing”) software when we were only 3 years old, long before we ever began to think about anything. We learned to use words like “you” and “me” long before we had any capacity for reflection.

So one lesson is – there’s no absolute difference between humans and other animals, just as there’s none between living beings and non-living matter. In both cases there comes to be a vast difference, because of a long, gradual evolutionary process that developed qualitatively new kinds of capacities.

Another lesson is – when we try to think reflectively about ourselves and the world, we’re using sophisticated conceptual tools that are built on top of even more sophisticated linguistic software, through which we interpret the world unconsciously, i.e. automatically. For example, we automatically tend to project our own kind of awareness on other people and animals and even (in many pre-industrial cultures) on inanimate objects in the world around us.

So as to the “hard problem” – when we humans look around and “perceive” things around us, we’re just as completely unconscious of the vast amount of linguistic software-processing involved, as we are of the vast amount of neural hardware-processing that’s going on. It’s very easy to imagine we’re doing something like what a cat does when it looks around, “just seeing.” And if we start thinking about it reflectively, we may very easily convince ourselves that the cat “must have an internal subjective world of sensation” or even “a self.” And now we do have a hard problem, one that will never be solved because its terms will never even be clearly defined.

We end up talking about “what it’s like to be a bat,” for example, as if that clarified the issue. But the difference between what it’s like to be you and what it’s like to be a non-talking animal is on the same order of magnitude as the difference between a stone and a living cell. It’s just that we’re not as far along at understanding our humanity as we are at understanding biology.
 
  • #39
JDStupi said:
Since the hard problem is based on the divide between "what it feels like" and its "objective" manifestations, the animal most likely does have qualia, and as such there is a hard problem for it just as there is for us.

Here, I do not claim to have more knowledge of what constitutes "awareness", and the relationships between "awareness" and "subjectivity", but it does seem that we must trace it biologically much further down than humans in order to properly understand it.


I agree that there's an aspect of the subject/object divide that goes back a long way in evolution. There’s a basic difference in perspective between seeing something “from outside” – the way we see objects – and seeing “from inside”, from one’s own point of view.

In fact, I think this kind of difference is important even in physics. Physics has its own "hard problem" having to do with the role of "the observer" in quantum mechanics. It seems that not even the physical world is fully describable objectively, "from outside" – at a fundamental level, it seems that you have to take a point of view inside the web of physical interaction in order to grasp its structure.

So it may well turn out to be meaningful to talk not only about the point of view of an animal, but also about the point of view of an atom.

The problem with treating this as a problem of “consciousness” – as even some reputable physicists do, sadly – is what I was trying to get at in my previous post. When we do that, we unconsciously import all kinds of unstated assumptions that an animal’s point of view on the world or an atom’s must be similar to our own.

Before anyone begins to ask questions about the relation of our conscious experience to that of an animal, I would recommend that they spend some time thinking about the very great differences that exist between the conscious experience of humans, say, in oral cultures and in literate culture. (Look up Walter Ong’s work or Eric Havelock’s, for example.) That helps give a sense of how radically different one’s “consciousness” and “sense of self” can be, even when the underlying language has hardly changed.

So the gist of my position is not that only humans have some sort of “internal life” going on in their brains. It’s that we tend to use terms like “consciousness” and “subjectivity” and “representation” to cover a vast range of very different things. If we want to understand these issues better, it’s the differences we should be focusing on.
 
  • #40
ConradDJ said:
So the gist of my position is not that only humans have some sort of “internal life” going on in their brains. It’s that we tend to use terms like “consciousness” and “subjectivity” and “representation” to cover a vast range of very different things. If we want to understand these issues better, it’s the differences we should be focusing on.

Certainly, this is exactly what I was getting at, and I completely agree. When I asked in what way the two "I"s were the same in meaning or reference, essentially what I was getting at is what you said: that the ideas of "I", "awareness", and "self" are not particulars we refer to with established definitions, but rather broad classes of things. We cannot seek some one explanatory principle that will illuminate the "nature of consciousness/awareness" and attempt to apply it to everything, for the sole reason that "something that explains everything, explains nothing": everything would just blend together into some indistinguishable blob, rather than letting us see the differences and their working interrelations, and from this further sharpen our ideas on these subjects.


ConradDJ said:
The problem with treating this as a problem of “consciousness” – as even some reputable physicists do, sadly...

I agree with this also. Those physicists who jump from the measurement problem's interactive aspects directly to "consciousness must be the source of the weirdness" are making terribly large, unjustified leaps and are suffering from "Good Science/Bad Philosophy". But we shouldn't get into that here.

ConradDJ said:
So as to the “hard problem” – when we humans look around and “perceive” things around us, we’re just as completely unconscious of the vast amount of linguistic software-processing involved, as we are of the vast amount of neural hardware-processing that’s going on. It’s very easy to imagine we’re doing something like what a cat does when it looks around, “just seeing.” And if we start thinking about it reflectively, we may very easily convince ourselves that the cat “must have an internal subjective world of sensation” or even “a self.”

This much I agree with also. In principle, I agree that there exists a "hard problem of consciousness" for both humans and animals, because I do believe that animals have qualia and internal states. What you touch upon, though, is that methodologically, in our present state of knowledge, it would be silly to attempt to come up with a definitive criterion for "awareness", "self", and the like; we must try to come up with some means of description that is somewhat independent of our folk-psychological notions of consciousness. These questions will only be answered by examining the many variegated processes that occur at all levels of biological organization. Moreover, studying specifically human forms of consciousness will certainly shed light on the issues, because we will come to understand the nature of our perception better, and the ideas generated in studying the emergence of human awareness and "self" will IMO undoubtedly prove useful in other biological fields.

And yes, taking cues from older Gestalt views in psychology, the nature of our internal awareness is structured in various gestalts: our perception of the moment is not the sum total of external qualia, but rather an integrated gestalt which is the structure of our internal experience. Drastically different cultures which focus on drastically different aspects of human life will almost certainly have a completely different structuring of their internal perceptual manifold. This is relevant because, if we can barely understand what it is like to be in a completely different culture, or even what it is like to be the guy next door, how could we hope to understand "what it is like to be a bat" from an internal perspective? This, I believe, is simply an unsolvable problem.
 
  • #41
Lievo said:

You claimed to have a specific argument against a systems approach. Let's hear it. Or if you want to evade the question by pointing at a page of links, identify the paper you believe supplies the argument.

Lievo said:
This is the first I've heard that current practices are based on a philosophical systems approach! I'd have said they are based on common sense, and that no one gives 'the systems approach' a **** when it comes to medical practice. Always happy to learn something.

Again you supply a personal and unsupported reaction against an actual referenced case where people had to sit down and come up with a valid position on a fraught issue. It may indeed seem common sense to arrive at a systems definition of consciousness here, but that rather undermines your comment, doesn't it?
 
  • #42
ConradDJ said:
So the gist of my position is not that only humans have some sort of “internal life” going on in their brains. It’s that we tend to use terms like “consciousness” and “subjectivity” and “representation” to cover a vast range of very different things. If we want to understand these issues better, it’s the differences we should be focusing on.

This is precisely what people who want to debate consciousness should demonstrate a grasp of - the rich structure of the mind.

The hard problem tactic is to atomise the complexity of phenomenology. Everything gets broken down into qualia - the ineffable redness of red, the smell of a rose, etc. By atomising consciousness in this way, you lose sight of its actual complexity and organisation.

Robert Rosen called it isomerism. You can take all the molecules that make up a human body and dissolve them in a test tube. All the same material is in the tube, but is it still alive? Clearly you have successfully reduced a body to its material atoms, its components, but have lost the organisation that actually meant something.

Philosophy based on qualia-fication does the same thing. If you atomise, you no longer have a chance of seeing the complex organisation. So while some find qualia style arguments incredibly convincing, this just usually shows they have not ever really appreciated the rich complexity of the phenomenon they claim to be philosophising about.
 
  • #43
ConradDJ said:
My comments were certainly aimed at what’s specifically human in “consciousness” and “sense of self.”
Sure, which is to me the problem. Interesting post. If I may reformulate, you're saying that we should not anthropomorphise: it's not because my brain interprets an animal as having a feeling that the animal actually has this feeling. My brain doesn't know what's going on inside; it just interprets the external signs as human-like because of the way it is wired.

Fair enough. This is not attackable in any way, which is why it's the hard problem again. But when should we stop this line of thinking? By the very same logic, you don't know that I, or anyone but you, am conscious. You're making an analogy with yourself, and you may well be wrong to do so.

Of course I don't believe in this solipsism, but exactly the same logic applies to chimps. So where do we stop? At this point, you would probably say: hey, but humans are humans; we know it's the same kind of brain, so for our species it's correct to reject solipsism.

I like this way of thinking, and I would agree that I don't know what it's like to be a bat. But I'd argue that both the behavior and the brain of chimps are so similar to human behavior and brains that I don't see any reason to think their thoughts are significantly different.

This was not obvious in Vygotsky's time, but have a look at these:

http://www.nytimes.com/2010/06/22/science/22chimp.html?_r=1
http://www.cell.com/current-biology/fulltext/S0960-9822(10)00459-8

When I see a young chimp peeing on the dead body of the one he attacked, I tend to think his subjective experience is quite similar to a human's in the same situation. I may be wrong, but I find the body of ethological data collected in recent years quite convincing.

ConradDJ said:
These abilities are quite amazing in themselves, but if you talk to biologists or ethologists I doubt you’ll find many who believe there is a “hard problem” here. (...) It’s clear why these capacities evolved, it’s clear how evolution works, and there’s no mystery any more about the relationship of evolving self-reproducing organisms to the physical world.
*cough cough*. Well, I don't want to argue or discuss it further, but I promise that's not the view of most of my colleagues. :wink:
 
  • #44
ConradDJ said:
Before anyone begins to ask questions about the relation of our conscious experience to that of an animal, I would recommend that they spend some time thinking about the very great differences that exist between the conscious experience of humans, say, in oral cultures and in literate culture. (Look up Walter Ong’s work or Eric Havelock’s, for example.) That helps give a sense of how radically different one’s “consciousness” and “sense of self” can be, even when the underlying language has hardly changed.
Interesting. That may relate to a question I asked here: https://www.physicsforums.com/showthread.php?t=472298. If you could turn it into a specific claim supported by specific evidence, that'd be interesting.
 
  • #45
apeiron said:
identify the paper you believe supplies the argument.
Sure. How could I not answer such a polite demand? However, if you don't mind, I'll wait for you to start providing the experimental evidence supporting some of your past claims.

apeiron said:
Again you supply your personal and unsupported reaction against an actual referenced case
My personal reaction is the reaction of a scientist who has worked specifically on this issue. So, to be perfectly clear: I know that, at least regarding coma, your statement is bull.
 
  • #46
Lievo said:
My personal reaction is the reaction of a scientist who has worked specifically on this issue. So, to be perfectly clear: I know that, at least regarding coma, your statement is bull.

As usual, references please. And here you seem to be saying that you can even cite your own research. So why be shy?
 
  • #47
JDStupi said:
In principle, I agree that there exists a "hard problem of consciousness" for both humans and animals, because I do believe that animals have qualia and internal states...

Drastically different cultures which focus on drastically different aspects of human life will almost certainly have a completely different structuring of their internal perceptual manifold. This is relevant because, if we can barely understand what it is like to be in a completely different culture, or even what it is like to be the guy next door, how could we hope to understand "what it is like to be a bat" from an internal perspective?


JD – Thanks, what you say makes sense to me. The point is not to stop people from trying to understand the mental life of chimps or bats, if for some reason they’re inclined to do that. But when people think about consciousness, I imagine it’s usually because they’re trying to understand their own – which is of course the only mentality they’ll ever actually experience. And it’s a very bad way to start by skipping over the many, many levels of attention-focusing techniques that we learn as we grow up into language – which is pretty much what human consciousness is made of – and defining the topic of study as some kind of generic “sense of self” that chimps have but maybe mollusks don’t.

There’s no absolute dividing-line between “conscious” and “non-conscious” – or “sentient” and “non-sentient” for that matter. These are highly relative terms. But I believe there are two huge leaps in our evolutionary history, with the origin of life and the origin of human communication. The problem is that we understand the first of these relatively well, and the second almost not at all.

apeiron said:
This is precisely what people who want to debate consciousness should demonstrate a grasp of - the rich structure of the mind...

Philosophy based on qualia-fication does the same thing. If you atomise, you no longer have a chance of seeing the complex organisation.


Yes. I do think the status of “qualia” is an interesting topic – but again, this is a highly relative term. As you well understand, it’s not as though there is something like a simple, “primitive” experience of “red” we can go back to and take as a pristine starting point for assembling a conscious mind.

Lievo said:
By the very same logic, you don't know that I, or anyone but you, am conscious. You're making an analogy with yourself, and you may well be wrong to do so.

Of course I don't believe in this solipsism, but exactly the same logic applies to chimps. So where do we stop? At this point, you would probably say: hey, but humans are humans; we know it's the same kind of brain, so for our species it's correct to reject solipsism.


No, you’re not getting my point. I’m not interested in proving that solipsism is incorrect – no sensible person needs to do that. I’m just as little interested in proving that chimps are like humans, or unlike humans, both being obviously true in various ways. How can there be a “correct” answer to the question, “Do chimps have an internal mental life like ours?”

My point is that we interpret the mentality of others by projecting, and that this is a primary human capacity. If we didn’t imagine each other as people, communication in the human sense would be inconceivable. It’s obvious to me that no one imagines anyone else’s internal world “correctly” – what would that even mean? But because we automatically imagine each other as conscious beings, we open up the possibility of talking with each other even about our private, internal experience, and it’s amazing how profound the experience of communication can seem, sometimes.

But because the projective imagination is so fundamental to us, if we want to understand anything at all about consciousness and its history, we have to focus on the differences. Unless we point to specific distinctions between different ways of being "conscious", we literally don't know what we're talking about.

For example – before Descartes, I think it’s reasonable to suppose that no one in the history of human thought had ever experienced the world as divided between an “external objective reality” and a “subjective inner consciousness”... because that difference had never before been conceptualized, never brought into human language. But today any thinking person probably experiences the world that way, because the subject/object split became basic to Western thought during the 17th century, and long ago percolated into popular language. So today it takes a real stretch of imagination to glimpse what “consciousness” was like in the Renaissance... and we can only hope to do that if we can focus on landmarks like Descartes’ Meditations and take them seriously as consciousness-changing events.

I expect that to be controversial – but that’s what I mean by “focusing on the differences” in consciousness.
 
  • #48
I was going to suggest the mirror test, but I thought it would be a good idea to try it out on myself first. Step 1 is to take a good look at yourself in the mirror. Then put a mark on your forehead and look in the mirror again. If you don't pick at your forehead, that means either that you are not self-aware or that you are a slob. If you do pick at your forehead, it means you are self-aware, but not necessarily: first you have to consider whether you always pick at your forehead. I failed the test myself, but I'm told that chimps, dolphins, and elephants have passed it.
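Tongue in cheek, the decision logic of the mark test as described above can be sketched as code. This is just a toy formalisation of the post; the function name and the return labels are my own invention, and real protocols control for baseline touching with sham-marked trials:

```python
def mirror_test(picks_at_mark, habitually_picks_forehead):
    """Toy decision logic for the informal mirror (mark) test.

    picks_at_mark: did the subject touch the mark after seeing the mirror?
    habitually_picks_forehead: does the subject touch its forehead anyway?
    """
    if not picks_at_mark:
        # No mark-directed touching: no evidence of self-recognition
        return "not self-aware, or a slob"
    if habitually_picks_forehead:
        # Touching could be habit rather than mirror-guided behaviour
        return "inconclusive"
    return "self-aware"

print(mirror_test(True, False))   # self-aware
print(mirror_test(True, True))    # inconclusive
print(mirror_test(False, False))  # not self-aware, or a slob
```

The "habitual picker" branch is exactly the caveat the post raises: a positive result only counts if the touching is actually guided by the mirror image.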
 
Last edited:
  • #49
ConradDJ said:
Yes. I do think the status of “qualia” is an interesting topic – but again, this is a highly relative term. As you well understand, it’s not as though there is something like a simple, “primitive” experience of “red” we can go back to and take as a pristine starting point for assembling a conscious mind.

A lot of philosophy of mind is motivated by a naive atomistic definition of qualia. My general argument against this is the dynamicist's alternative. Qualia are like the whorls of turbulence that appear in a stream. Each whorl seems impressively like a distinct structure. But try to scoop the whorl out in a bucket and you soon find just how contextual that localised order was.

Plenty of people have of course reacted against the hegemony of atomistic qualia. For instance, here are two approaches that try to emphasise the contextual nature of seeing any colour.

On this new view of the origins of color vision, color is far from an arbitrary permutable labeling system. Our three-dimensional color space is steeped with links to emotions, moods, and physiological states, as well as potentially to behaviors. For example, purple regions within color space are not merely a perceptual mix of blue and red, but are also steeped in physiological, emotional and behavioral implications – in this case perhaps of a livid male ready to punch you.

http://changizi.wordpress.com/2010/...-richard-dawkins-is-your-and-my-red-the-same/

Building on these insights, this project defines qualia to be salient chunks of human experience, which are experienced as unified wholes, having a definite, individual feeling tone. Hence the study of qualia is the study of the "chunking," or meaningful structuring, of human experience. An important, and seemingly new, finding is that qualia can have complex internal structure, and in fact, are hierarchically organized, i.e., they not only can have parts, but the parts can have parts, etc. The transitive law does not hold for this "part-of" relation. For example, a note is part of a phrase, and a phrase is part of a melody, and segments at each of these three levels are perceived as salient, unified wholes, and thus as qualia in the sense of the above definition - but a single note is not (usually) perceived as a salient part of the melody.

http://cseweb.ucsd.edu/~goguen/projs/qualia.html
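Goguen's claim that the transitive law fails for this "part-of" relation is easy to make concrete. Here is a minimal sketch of his note/phrase/melody example; the relation contents, names, and representation as a set of pairs are mine, not Goguen's:

```python
# Toy model of a non-transitive "salient-part-of" relation over
# musical experience, following the note/phrase/melody example.
salient_part_of = {
    ("note", "phrase"),    # a note is heard as a salient part of a phrase
    ("phrase", "melody"),  # a phrase is heard as a salient part of a melody
    # ("note", "melody") is deliberately absent: a single note is not
    # (usually) perceived as a salient part of the whole melody.
}

def is_salient_part(a, b):
    """True if `a` is directly perceived as a salient part of `b`."""
    return (a, b) in salient_part_of

print(is_salient_part("note", "phrase"))    # True
print(is_salient_part("phrase", "melody"))  # True
print(is_salient_part("note", "melody"))    # False, despite the chain
```

Representing the relation as an explicit set of pairs, rather than computing its transitive closure, is exactly what lets the model capture Goguen's point: chaining two salient-part links does not automatically yield a third.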

And these neo-Whorfian experiments show how language actively plays a part in introspective awareness, supporting the general Vygotskian point too.

Boroditsky compared the ability of English speakers and Russian speakers to distinguish between shades of blue. She picked those languages because Russian does not have a single word for blue that covers all shades of what English speakers would call blue; rather, it has classifications for lighter blues and darker blues as different as the English words yellow and orange. Her hypothesis was that Russians thus pay closer attention to shades of blue than English speakers, who lump many more shades under one name and use more vague distinctions. The experiment confirmed her hypothesis. Russian speakers could distinguish between hues of blue faster if they were called by different names in Russian. English speakers showed no increased sensitivity for the same colors. This suggests, says Boroditsky, that Russian speakers have a "psychologically active perceptual boundary where English speakers do not."

http://www.stanfordalumni.org/news/magazine/2010/mayjun/features/boroditsky.html

But clearly there is a heck of a lot more to be said about the status of qualia. It does not seem too hard to argue their contextual nature. It is in fact an "easy problem". Harder is then saying something about where this leaves the representational nature of a "contextualised quale".

Even as a whorl in the stream, a qualitative feeling of redness is "standing for something" in a localising fashion. It is meaningfully picking out a spot in a space of possible reactions. So we have to decide whether, using the lens of Peircean semiotics, we are dealing with a sign that is iconic, indexical or symbolic. Or using Pattee's hierarchy theory distinctions, does qualiahood hinge on the rate dependent/rate independent epistemic cut?

In other words, we cannot really deal satisfactorily with qualia just in information theoretic terms. The very idea of representation (as a construction of localised bits that add up to make an image) is information based. And even if we apply a "contextual correction", pointing out the web of information that a quale must be embedded in, we have not solved the issue in a deep way... because we have just surrounded one bit with a bunch of other bits.

What we need instead is a theory of meaning. Which is where systems approaches like semiotics and hierarchy theory come in.
 
  • #50
Goguen is a good source for a dynamicist/systems take on qualia and representations.

http://cseweb.ucsd.edu/~goguen/projs/qualia.html

You can see how this is a battle between two world views. One is based on bottom-up causal explanations, which start off seeming to do well because they make the world so simple, then eventually smack into hard problems when it becomes obvious that something has gone missing. The other is based on a systems causality where there is also a global level of top-down constraint, so that causality is a packaged deal involving the interactions between local bottom-up actions and global top-down ones.

Anyway, here are some snippets from a Goguen paper on musical qualia which illustrate what others are currently saying (using all the same essential concepts that I employ, such as hierarchy, semiosis, anticipation, socialisation, contextuality...).

http://cseweb.ucsd.edu/~goguen/pps/mq.pdf

Musical Platonism, score nominalism, cognitivism, and modernist approaches in general, all assume the primacy of representation, and hence all flounder for similar reasons. Context is crucial to interpretation, but it is determined as part of the process of interpretation, not independently or in advance of it. Certain elements are recognized as the context of what is being interpreted, while others become part of the emergent musical "object" itself, and still others are deemed irrelevant. Moreover, the elements involved and their status can change very rapidly. Thus, every performance is uniquely situated, for both performers and listeners, in what may be very different ways.

In particular, every performance is embodied, in the sense that very particular aspects of each participant are deeply implicated in the processes of interpretation, potentially including their auditory capabilities, clothing, companions, musical skills, prior musical experiences, implicit social beliefs (e.g., that opera is high status, or that punk challenges mainstream values), spatial location, etc., and certainly not excluding their reasons for being there at all (this is consistent with the cultural historical approach of Lev Vygotsky).

Most scientific studies of art are problematic for similar reasons. In particular, the third person, objective perspective of science requires a stable collection of "objects" to be used as "data," which therefore become decontextualized, with their situatedness, embodiment, and interactive social nature ignored.

This paper draws on data from both science and phenomenology, in a spirit similar to the "neurophenomenology" of Francisco Varela, as a way to reconcile first and third person perspectives, by allowing each to impose constraints upon the other. Such approaches acknowledge that the first and third person perspectives reveal two very different domains, neither of which can be reduced to the other, but they also deny that these domains are incompatible.

Whatever approach is taken, qualia are often considered to be atomic, i.e., non-reducible, or without constituent parts, in harmony with doctrines of logical positivism, e.g., as often attributed to Wittgenstein’s Tractatus. Though I have never seen it stated quite so baldly, the theory (but perhaps "belief" is a better term, since it is so often implicit) seems to be that qualia atoms are completely independent from elements of perception and cognition, but somehow combine with them to give molecules of experience.

Peirce was an American logician concerned with problems of meaning and reference, who concluded that these are relational rather than denotational, and who also made an influential distinction among modes of reference, as symbolic, indexical, or iconic... A semiotic system or semiotic theory consists of: a signature, which gives names for sorts, subsorts, and operations; some axioms; a level ordering on sorts having a maximum element called the top sort; and a priority ordering on the constructors at each level...
Axioms are constraints on the possible signs of a system. Levels express the whole-part hierarchy of complex signs, whereas priorities express the relative importance of constructors and their arguments; social issues play an important role in determining these orderings. This approach has a rich mathematical foundation...

The Anticipatory Model captures aspects of Husserl’s phenomenology of time. For example, it has versions of both retention and protention, and the right kind of relationship between them. It also implies Husserl’s pithy observation that temporal objects (i.e., salient events or qualia) are characterized by both duration and unity. Since it is not useful to anticipate details very far into the future, because the number of choices grows very quickly, an implementation of protention, whether natural or artificial, needs a structure to accommodate multiple, relatively short projections, based on what is now being heard, with weights that increase with elapsed time; this is a good candidate for implementation by a neural net of competing Hebbian cell assemblies, in both the human and algorithmic instantiations, as well as robots (as in [71]), and it also avoids reliance on old style AI representation and planning.

This paper has attempted to explore the qualitative aspects of experience using music as data, and to place this exploration in the context of some relevant philosophical, cognitive scientific, and mathematical theories. Our observations have supported certain theories and challenged others. Among those supported are Husserl’s phenomenology of time, Vygotsky’s cultural-historical approach, and Meyer’s anticipatory approach, while Chalmers’ dualism, Brentano’s thesis on intentionality, qualia realism, qualia atomism, Hume’s pointilist time, and classical cognitivism have been disconfirmed at least in part.

In particular, embodiment, emotion, and society are certainly important parts of how real humans can be living solutions to the symbol grounding problem. The pervasive influence of cognitivism is presumably one reason why qualia in general, and emotion in particular, have been so neglected by traditional philosophy of mind, AI, linguistics, and so on. We may hope that this is now beginning to change.

This suggests that all consciousness arises through sense-making processes involving anticipation, which produce qualia as sufficiently salient chunks. Let us call this the C Hypothesis; it provides a theory for the origin and structure of consciousness. If correct, it would help to explain why consciousness continues to seem so mysterious: it is because we have been looking at it through inappropriate, distorting lenses; for example, attempting to view qualia as objective facts, or to assign them some new ontological status, instead of seeing segmentation by saliency as an inevitable feature of our processes of enacted perception.
 
  • #51
ConradDJ said:
I’m just as little interested in proving that chimps are like humans, or unlike humans, both being obviously true in various ways. How can there be a “correct” answer to the question, “Do chimps have an internal mental life like ours?”
Ok, let me challenge your lack of interest in defining the topic of study as some kind of generic “sense of self” that chimps have but maybe mollusks don’t. You'll agree that chimps behave in ways we can't yet program, which is a good reason to believe we are missing something below the human level. My guess is that once someone can program something that behaves like a chimp, it won't be too hard to scale up to the human level. What I think may challenge your lack of interest is that this last thought should be shared by anyone who regards language and culture as of primary importance to the human spirit. Don't you think?

ConradDJ said:
My point is that we interpret the mentality of others by projecting, and that this is a primary human capacity.
I won't argue with your line of thinking, but the premise is just not supported by the current evidence.

http://www.sciencemag.org/content/312/5782/1967.abstract

ConradDJ said:
For example – before Descartes, I think it’s reasonable to suppose that no one in the history of human thought had ever experienced the world as divided between an “external objective reality” and a “subjective inner consciousness”... because that difference had never before been conceptualized, never brought into human language.
http://en.wikipedia.org/wiki/Dualism_(philosophy_of_mind)

Ideas on mind/body dualism are presented in Hebrew Scripture (as early as Genesis 2:7) where the Creator is said to have formed the first human as a living, psycho-physical fusion of mind and body--a holistic dualism. Mind/body dualism is also seen in the writings of Zarathushtra. Plato and Aristotle deal with speculations as to the existence of an incorporeal soul that bore the faculties of intelligence and wisdom. They maintained, for different reasons, that people's "intelligence" (a faculty of the mind or soul) could not be identified with, or explained in terms of, their physical body.[2][3]
 
  • #52
apeiron said:
Qualia are like the whorls of turbulence that appear in a stream. Each whorl seems impressively like a distinct structure. But try to scoop the whorl out in a bucket and you soon find just how contextual that localised order was.

apeiron, you might be interested in this paper by Rabinovich that references experiments by Laurent and Friedrich:

Experimental observations in the olfactory systems of locust (7) and zebrafish (8) support such an alternative framework. Odors generate distributed (in time and space), odor- and concentration-specific patterns of activity in principal neurons. Hence, odor representations can be described as successions of states, or trajectories, that each correspond to one stimulus and one concentration (9). Only when a stimulus is sustained does its corresponding trajectory reach a stable fixed-point attractor (10). However, stable transients are observed whether a stimulus is sustained or not—that is, even when a stimulus is sufficiently short-lived that no fixed-point attractor state is reached. When the responses to several stimuli are compared, the distances between the trajectories corresponding to each stimulus are greatest during the transients, not between the fixed points (10).

Rabinovich, M., Huerta, R., Laurent, G. (2008) Transient dynamics for neural processing. Science 321, 5885
http://biocircuits.ucsd.edu/Papers/Rabinovich08.pdf

References in paper
7. G. Laurent, H. Davidowitz, Science 265, 1872 (1994).
8. R. Friedrich, G. Laurent, Science 291, 889 (2001).
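The "stable transients" in that passage are usually modelled with winnerless-competition networks of generalized Lotka-Volterra (May-Leonard) form. Below is a toy sketch in that spirit (all parameters invented for illustration, not fitted to the olfactory data): with asymmetric inhibition the network never settles on a single winner, but cycles through a sequence of metastable states, which is the kind of trajectory-based coding Rabinovich describes.

```python
N = 3
alpha, beta = 0.8, 1.5          # asymmetric inhibition: alpha < 1 < beta
drive = 1e-6                    # tiny sustained input keeps the switching going

def step(x, dt=0.02):
    """One Euler step of dx_i/dt = x_i * (1 - x_i - a*x_{i+1} - b*x_{i-1}) + drive."""
    nxt = []
    for i in range(N):
        inhib = x[i] + alpha * x[(i + 1) % N] + beta * x[(i - 1) % N]
        nxt.append(x[i] + dt * (x[i] * (1.0 - inhib) + drive))
    return nxt

x = [0.8, 0.1, 0.05]
winners = []                    # which unit dominates at each moment
for _ in range(20000):          # integrate out to t = 400
    x = step(x)
    winners.append(max(range(N), key=lambda i: x[i]))

# The trajectory visits each metastable state in turn: every unit
# dominates during some transient epoch, and activity stays bounded.
assert set(winners) == {0, 1, 2}
assert all(0.0 <= v <= 1.1 for v in x)
```

The interesting behaviour is exactly what the quote emphasises: the informative structure is in the sequence of transients between saddle states, not in any final attractor.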
 
Last edited by a moderator:
  • #53
apeiron said:
Goguen is a good source for a dynamicist/systems take on qualia and representations.

http://cseweb.ucsd.edu/~goguen/projs/qualia.html

Apeiron -- thanks very much for this link... I have a particular interest in how we perceive melodies, partly because it helps me think about the layered structure of time. I'll look at this and perhaps we'll discuss it in another thread.
 
  • #54
Hi Lievo – responding to your #51 above...

When I said projective interpreting was a primary human capacity, I didn’t mean to imply that it’s uniquely human – though of course in some ways that’s true. But I noted in https://www.physicsforums.com/showpost.php?p=3135517&postcount=38" above that “mind-reading” at a primitive level is something we share with other mammals. I would say that all our deep emotional sensitivities are essentially mammalian. What’s uniquely human about us are the very complex ways these sensitivities evolve, in the relationships we develop by talking with each other.

To return to the topic of “consciousness” and the “sense of self” — my main point is that there is no absolute dividing-line here. I suggest we think of consciousness as the sum of all the ways we have of noticing things and paying attention to them – not only to things, but especially to each other, and to ourselves. What gives us humans a “sense of self” is that we have more or less highly developed relationships with ourselves, that we conduct through the same means we use to relate to other people... again, mostly through talking. Of course when we talk to ourselves we get to skip a lot of the articulation needed to make ourselves clear to others... but language is still there in the background as what apeiron calls the scaffolding for thought.

“Consciousness” in this sense is not a single capacity we either do or don’t share with other animals. Mice surely see and feel... whatever it is they see and feel. But what gives us humans a highly-developed sense of “living inside our own minds” is that we talk to ourselves a lot... especially today.

If you read Homer, you’ll also find Achilles talking to himself – except that it’s not described that way. He talks to his thymos... which is apparently some bodily organ. Homeric heroes never just sit there and “think” – before philosophy was invented (or its counterparts in some other cultures), human language had no way to recognize such a process.

So again, there are many, many stages we humans have evolved through, in getting to “have a sense of self.” You’re right that the mind/body distinction is ancient – but you’ll find that right through Medieval times the mind is almost never described “from inside” – it’s described as a thing, a kind of object. Augustine’s Confessions come closest to a modern “subjective” view, in his reflections on memory (not present-time consciousness). But even that is almost unique in ancient literature.

The breakthrough with Descartes is not in his “theory” about mind vs. body, but in his taking such a radically subjective view, even questioning whether the objective world is “really there.” It’s only in the latter part of the 17th century that people began to get this point of view and see the world “from inside their own heads”, so to speak. It’s only at that point that “the mind” began to be conceived as “subjective consciousness”.

What’s really interesting about this, to me, is that each one of us personally has evolved through many stages of learning to be self-aware, as we grow up. There is perhaps no ultimate end-point to this evolution... but there is a point at which we typically imagine we’ve come to the end, with “adulthood” (from Latin for “at the end”). At that point we seem to fall under an illusion which can be hard to dispel, that our way of seeing things is just “being conscious” in some neutral, absolute sense. And then we start wondering whether mice or monkeys are also “conscious”, etc.
 
Last edited by a moderator:
  • #55
Welcome Metarepresent!

Q_Goest said:
Ramachandran takes the fairly uncontroversial stance that the overall phenomenal experience (and the many parts of that experience) is what equates to the mental representation.

All this isn’t to say that our present approach to consciousness is without difficulties as Ramachandran points out in the video. Some very serious issues that pop out of this approach include “mental causation” and the problems with downward causation (ie: strong downward causation) and the knowledge paradox. So once we understand the basic concept of mental representation as presented for example by Ramachandran, we then need to move on to what issues this particular model of the human mind gives rise to and look for ways to resolve those issues.



I'd like to hear more about the problems and how they might be resolved. I wonder if this idea regarding processing could be related to the problems and solutions.


"...cognitive neuroscientists have increasingly come to view grapheme recognition as a process of hierarchical feature analysis (see Grainger et al., 2008 and Dehaene et al., 2005 for reviews). As in the original Pandemonium model (Selfridge, 1959), hierarchical feature models posit a series of increasingly complex visual feature representations and describe grapheme recognition as resulting from the propagation of activation through this hierarchical network. In the initial stages of letter processing, visual input activates component features of the letter (line segments, curves, etc.) and results in the partial activation of letters containing some or all of the component features. Grapheme identification occurs over time via a competitive activation process involving some combination of excitatory and inhibitory connections both within the grapheme level and between the grapheme level and other representational levels, both bottom–up and top–down.

This Pandemonium model of letter perception is supported by a wealth of studies on letter recognition, indicating that the number of component features shared by a pair of letters predicts the likelihood of those letters being confused (Geyer and DeWald, 1973). Integrating these behavioral measures with the neuro-anatomical models of visual perception, careful examination of the brain response to pseudo-letters (non-letter shapes visually matched to the component features comprising real letters) as well as infrequent and frequent letters shows a cascading hierarchy of processing within the PTGA, proceeding from posterior to anterior regions (Vinckier et al., 2007). Further, ERP studies of letter processing (e.g., comparing the brain response to letters and pseudo-letters) suggest feature-level processing occurs before 145 ms, and letter-level processes occur thereafter (Rey et al., 2009)."

http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6WNP-5093N4T-4&_user=10&_coverDate=10%2F15%2F2010&_rdoc=1&_fmt=high&_orig=search&_origin=search&_sort=d&_docanchor=&view=c&_rerunOrigin=scholar.google&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=5b1164f3c755cdcedf25ae5888fc31bc&searchtype=a
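The competitive activation process that passage describes can be caricatured in a few lines. This is only an illustrative toy (the letters, feature inventories and parameters are all invented, not Selfridge's or Grainger's actual model): shared features excite candidate letters, letters laterally inhibit each other, and identification falls out of the competition. Letters that share more features with the input stay more active, mirroring the confusability finding.

```python
LETTERS = {                    # toy feature inventories
    "E": {"top_bar", "mid_bar", "bot_bar", "left_vert"},
    "F": {"top_bar", "mid_bar", "left_vert"},
    "L": {"bot_bar", "left_vert"},
}

def recognize(features, steps=50, decay=0.5, excite=0.2, inhibit=0.15):
    """Iterate excitation from shared features plus lateral inhibition."""
    act = {c: 0.0 for c in LETTERS}
    for _ in range(steps):
        new = {}
        for c, feats in LETTERS.items():
            bottom_up = excite * len(feats & features)   # shared features excite
            lateral = inhibit * sum(a for o, a in act.items() if o != c)
            new[c] = max(0.0, min(1.0, decay * act[c] + bottom_up - lateral))
        act = new
    return max(act, key=act.get), act

best, act = recognize({"top_bar", "mid_bar", "bot_bar", "left_vert"})
assert best == "E"
# F shares more features with E's input than L does, so it stays more
# active - the basis of letter confusions (cf. Geyer and DeWald, 1973).
assert act["F"] > act["L"]
```

This only sketches the bottom-up, within-level competition; the papers quoted above add top-down feedback from higher representational levels as well.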

"Hierarchical organization
The ventral visual pathway is organized as a hierarchy of areas connected by both feedforward and feedback pathways (see Figure I). From posterior occipital to more anterior inferotemporal regions, the size of the neurons’ receptive fields increases by a factor of 2–3. This is accompanied by a systematic increase in the complexity of the neurons’ preferred features, from line segments to whole objects, and a corresponding increase in invariance for illumination, size, or location."

http://www.ncbi.nlm.nih.gov/pubmed/15951224
 
Last edited:
  • #56
ConradDJ said:
When I said projective interpreting was a primary human capacity, I didn’t mean to imply that it’s uniquely human – though of course in some ways that’s true.

Another way of putting it is that social animals project as best they can; we just have much more going on internally to project. We have a running, socially formed sense of self as something extra to project on to others (and, in animistic fashion, on to chimps, mice, trees and forces of nature).

The mice experiment (how do these things get through the ethics committees?) only shows at most mice being sensitised to their own pain, not mice sympathising with the pain of others. As I say, you do have to wonder about the empathetic abilities of experimenters that can inject the mice with cramp-inducers in the first place.

To return to the topic of “consciousness” and the “sense of self” — my main point is that there is no absolute dividing-line here.

The animal sense of self would be the completely subjective embodied form. The point about humans is that we carry around in our heads a second "objective" view of ourselves - the view that society would have of our actions and our existence. Our every embodied response or impulse is being run through that secondary layer of ideas that is socially evolved. We are made conscious of being an agent within a collective structure. That is the big irony: the more aware of our selves we are, the more deliberately we can play a creative part in that larger state of being.

And this irony is also why Vygotsky (a Marxist!) has been so strongly resisted within Anglo-Saxon culture. The Anglo-Saxon mythology about free individuals (enchained by the arbitrary conventions of their societies) virtually demands that the essence of a person is biological. If they cannot be free in the way of a soul, then genetic determinism is preferred to social determinism.

If you read Homer, you’ll also find Achilles talking to himself – except that it’s not described that way. He talks to his thymos... which is apparently some bodily organ. Homeric heroes never just sit there and “think” – before philosophy was invented (or its counterparts in some other cultures), human language had no way to recognize such a process.

You have read Jaynes' curious success, The Bicameral Mind, then.

A very good book tracing the development of the language of mind, and so concepts about what the mind is, is Kurt Danziger's Naming the Mind.
http://www.kurtdanziger.com/Book 2.htm

What’s really interesting about this, to me, is that each one of us personally has evolved through many stages of learning to be self-aware, as we grow up.

What really shocked me when I first started out was how poor my introspective abilities actually were. It took years of practice (and good theories about what to expect to find) to really feel I was catching the detail of what actually took place.

It is the same as having a trained eye in any field of course. Someone who has never played football will just have a rushing impression of a bustle on the pitch. A good player will see an intricately organised tapestry.

Introspection is a learned skill. And society only wants to teach us to the level which suits its purposes. Another irony of the story. To go further, you have to get actually "selfish" and study psychophysics, Vygotskian psychology, social history, etc.
 
  • #57
fuzzyfelt said:
I'd like to hear more about the problems and how they might be resolved. I wonder if this idea regarding processing could be related to the problems and solutions.

There is no way of doing neuroscience without adopting a hierarchical view of processing. Or a belief in top-down action. Every cortical neuron has far more top-down feedback synapses than bottom-up input ones modulating its actions.

The debate really is over how to model this complexity. Some go towards computational models (such as Grossberg's ART). Others go towards dynamicist models - of the kind that Pythagorean cited.
 
  • #58
apeiron said:
There is no way of doing neuroscience without adopting a hierarchical view of processing. Or a belief in top-down action. Every cortical neuron has far more top-down feedback synapses than bottom-up input ones modulating its actions.

The debate really is over how to model this complexity. Some go towards computational models (such as Grossberg's ART). Others go towards dynamicist models - of the kind that Pythagorean cited.

They're not fundamentally mutually exclusive. I can quite easily (in principle) model an ART network with dynamical biophysical neurons. The only real difference is I'd be replacing postdictive states (1-0) with a predictive dynamical system that describes the biophysical mechanisms. So now we have a spectrum of states from 0 to 1 that depend on stimuli (and the history of stimuli) to the system. And of course the complexity grows with the number of neurons, and you eventually require supercomputers (or lots of patience).
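Pythagorean's point about replacing binary (1-0) states with a graded dynamical state can be sketched with a single leaky-integrator unit (parameters invented for illustration): its state lives anywhere in (0, 1) and depends on the history of stimulation, not just the current input.

```python
import math

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

def run(stimuli, tau=10.0, gain=4.0, dt=0.1):
    """Leaky integrator: tau * dx/dt = -x + sigmoid(gain * (I - 0.5))."""
    x, trace = 0.0, []
    for I in stimuli:
        x += (dt / tau) * (-x + sigmoid(gain * (I - 0.5)))
        trace.append(x)
    return trace

# Same final input (0.0), different histories -> different graded states.
a = run([1.0] * 200 + [0.0] * 20)   # long drive, then brief silence
b = run([0.0] * 220)                # never driven
assert 0.0 < a[-1] < 1.0            # a spectrum of states, not 1-0
assert a[-1] > b[-1] + 0.1          # state reflects stimulus history
```

Scaling this up to a full ART-style category layer would mean many such units coupled by competition, which is where the supercomputers (or the patience) come in.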
 
  • #59
Pythagorean said:
They're not fundamentally mutually exclusive.

No of course. In fact that is what led me to a causal perspective that sees them as complementary extremes.

It seems obvious that both are involved, but then the next thing is how do you model that in practice. The Laurent work you cited argues that dynamical networks can cycle linearly through a "channel" of metastable states. So you get a computational looking outcome - a liquid state machine.

Personally I don't think that the approach is significant. It may indeed turn out to be a level of mechanism that brain circuitry exploits. However I am talking about a more general accommodation between dynamism and computationalism. A model at a more abstract level.

This is why for example I emphasise Pattee's work on the epistemic cut, or Peircean semiotics. But I also agree that it is dynamism/computationalism across all levels. Or analog~digital, continuous~discrete, rate-dependent~rate-independent, or however one chooses to describe it.
 
  • #60
apeiron said:
Personally I don't think that the approach is significant.

It may not be in the general context of qualia, but I think the experimental observations directly support a statement like:

apeiron said:
Qualia are like the whorls of turbulence that appear in a stream. Each whorl seems impressively like a distinct structure. But try to scoop the whorl out in a bucket and you soon find just how contextual that localised order was.

It demonstrates how the appearance of transient events ("whorls of turbulence") in the "stream" of information can be correlated with perception events in the behavior of an organism (i.e. qualia presumably).
 
  • #61
apeiron said:
You have read Jaynes' curious success, The Bicameral Mind, then.


Yes, along with Onians' Origins of European Thought, Snell's Discovery of the Mind, etc. -- relics of my academic years (which are by now also ancient history). But thank you for the reference to Danziger, I'll put it on my want-list.

I still like Jaynes' book, which has an interesting take on what we mean by "consciousness". He points out that we can do most everything we do without being conscious of it at all... like driving all the way through town to work, while thinking about something else, oblivious to what you're passing on the road, yet all the while maneuvering around potholes, etc. We only really need to be conscious when we're dealing with new and challenging situations, he suggests. And he goes on to ask how people might have dealt with such situations before there was the kind of developed “sense of self” that came with philosophy and the emergence of “self-reflective” internal dialogue.

This was a fine effort to imagine a really different kind of consciousness. The odd thing is that he’s dealing with the period of transition from oral to written culture, but missed the significance of that change. It's so difficult not to take for granted the basic tools we ourselves use in being conscious, like the ability to record spoken language.

apeiron said:
What really shocked me when I first started out was how poor my introspective abilities actually were. It took years of practice (and good theories about what to expect to find) to really feel I was catching the detail of what actually took place.


Yes, you’re right – and of course, all the deep layers of consciousness that we evolved when we were young have long been covered up by the more sophisticated and more verbal layers we’ve built on top of them. We have no memory at all of our earliest years, since the neural structures to support conscious recollection were still undeveloped.
 
  • #62
apeiron said:
The animal sense of self would be the completely subjective embodied form. The point about humans is that we carry around in our heads a second "objective" view of ourselves - the view that society would have of our actions and our existence. Our every embodied response or impulse is being run through that secondary layer of ideas that is socially evolved.

Yes, I think this gets at what's essentially distinctive about human consciousness -- that we're operating mentally on two planes. The one plane is what we share with other animals, a highly evolved interactive connection to the world in present time. The other is a highly evolved projection of objective reality that we learn to construct and maintain in our heads as we learn to talk -- a projection that goes far beyond the "here and now", to include things that happened hundreds of years ago and things we imagine may happen far in the future, and things that may be going on now in distant countries, etc.

Human "language" is not merely a matter of words and grammar. In essence it's a software technology that has two primary functions -- (1) creating and maintaining this projection of the world from a standpoint "outside of time", as a perspective from which we experience and interpret our real-time interaction. And (2) communicating itself and its projections from one human brain to another, as its means of reproducing itself.

If we try to understand the differences between humans and other animals in terms of what we and they can or can't do, we can find primitive versions of most things human. Because we don't have clear definitions of "consciousness", "self", "introspection", "thought", "memory" or "language", etc. we can find ways to use all these terms for what animals do, if we want to.

But it seems to me the key is this software-layer of consciousness that has evolved by reproducing itself from brain to brain, and has evolved more and more complex ways of ensuring its own reproduction. You and I, as "conscious minds", are essentially run-time constructs of this software, running on the neural hardware of our brains.

So what's different between us and other animals isn't so much what we do as how we do it. What's unique about us is this software that gets installed in each of us in the first few years of our lives. At some point early on in human evolution, this symbiotic relationship became so vital to our biological survival that our brains and bodies began adapting very rapidly to support it -- including not only changes in the brain, but a great lengthening of the period in which human children are helplessly dependent on adults and deeply attached to them at an emotional level.
 
  • #63
ConradDJ said:
Yes, I think this gets at what's essentially distinctive about human consciousness -- that we're operating mentally on two planes.

So when it comes to the OP, there are three levels of selfhood. :-p

1) animal level is BEING a self.
2) human level is KNOWING you are BEING a self.
3) Vygotskean level is KNOWING that you KNOW you are BEING a self.

It is meta-metarepresentation.
 
  • #64
apeiron said:
So when it comes to the OP, there are three levels of selfhood. :-p

1) animal level is BEING a self.
2) human level is KNOWING you are BEING a self.
3) Vygotskean level is KNOWING that you KNOW you are BEING a self.

It is meta-metarepresentation.

So, to sum, I take it #2 is representing representations or manipulating representations of representations.

This is possible among many animals, even animals which may not communicate so specifically: e.g. a lion can weigh a visual representation of a number of defenders against an aural representation of a number of opponents, and will determine whether to act or not to act (Dehaene). Such abstractions are also performed by pre-verbal 4-6 month human infants (Dehaene, again). And then, the David Attenborough BBC doc, linked elsewhere, shows monkeys who produce and hear specific calls representing specific threats, which are confirmed by visual representations. Further, these representations may be manipulated for deception.

So, what is the actual distinction between # 2 and #3 in which language provides a uniquely human consciousness?
 
  • #65
fuzzyfelt said:
So, what is the actual distinction between # 2 and #3 in which language provides a uniquely human consciousness?

That was a joke :smile:. Sorry if it was not obvious.

Though 3 would indeed be an example of socially constructed knowledge that takes individual self-awareness to a higher level.

BTW I would not use the term representation as it is a strictly computational concept - information as input that gets displayed. The brain does not work like that. (Cue the usual chorus...)
 
  • #66
Yes, I was enjoying indulging in manipulations of metas, too :smile:. However, I don't see reason to take this last necessarily as a causal connection, if there is anything to these last ideas at all.
 
Last edited:
  • #68
Would a person with absolutely no memory have self?
 
  • #69
Absolutely none? Of course not.

An amnesiac? Yes, but the self wouldn't be allowed to evolve due to hippocampal damage. The hippocampus "writes" to your neocortex (especially while you sleep).
 
  • #70
Tregg Smith said:
Would a person with absolutely no memory have self?

You can't really have a brain and not have "memory" - a neural organisation with a specific developmental history.

But there is evidence for what happens when for example the hippocampus is destroyed and people can't form fresh autobiographical memories - store new experiences.

See the celebrated case of Clive Wearing.
http://en.wikipedia.org/wiki/Clive_Wearing
 