Are semantic representations innate?

  • Thread starter: Q_Goest
  • Tags: Representations
AI Thread Summary
Hilary Putnam critiques Chomsky's theory of innate Universal Grammar, arguing that semantic representations in the mind may develop from experience rather than being innate. The discussion explores whether meaning is universally inherent or shaped by individual experiences, with some participants suggesting that basic semantic concepts could emerge from evolutionary communication needs. The distinction between meaning and information is highlighted, emphasizing that meaning requires cognitive interpretation, while information exists independently. The conversation also touches on the implications of understanding in both human and non-human interactions, questioning the necessity of shared meaning for effective communication. Overall, the debate centers on the origins and nature of semantic representations in the mind.
Q_Goest
Science Advisor
In his book "Representation and Reality", Hilary Putnam writes about Chomsky:
Chomsky is famous for having proposed a theory according to which grammar is "innate" in the mind. According to Chomsky, there is a Universal Grammar - a structure and a set of categories which are universal, and not just because human environments are in certain respects all alike, but because this Universal Grammar is built into the basic structure of the mind itself. (pg 5)
...
A Chomskian theory of the semantic level will say that there are "semantic representations" in the mind/brain; that these are innate and universal; and that all our concepts are decomposable into such semantic representations. This is the theory I hope to destroy. (pg 5)

…if semantic representations in the brain are developed from experience, just as words in a public language are, rather than being composed out of an innate set of semantical primitives, there is no reason to think that a given representation (described syntactically) will not come to be given different meanings by different groups of human beings. ("different meanings" by the criteria used by a good interpreter, this means.) (pg 16)


So what do you think? Are there ‘semantic representations’ in the mind that are innate and universal? Or would you go along with Putnam? If they are not innate/universal, then how do you think meaning gets created in the mind?
 
Interesting question.

I haven't read that book, nor do I focus on the human brain, but despite that this question is tangent to the quest for unification of interactions in physics, in particular when associating interaction ~ communication.

How do two interacting parts learn to "understand each other"?

I don't find the question, applied only to the human level, clear enough to be worth commenting on. But one can generalize the question to ask whether there is a universal semantics that is the basis for how parts of the universe (here I picture the entire range of observers: elementary particles, molecules, cells, simple organisms, and of course humans included) learn to "understand" each other.

My personal take on that is the evolutionary perspective, where systems that have evolved have been selected for the capability to communicate. Those that fail are not favoured.

I think some basic semantics could be something as simple as good and bad, "I like" or "I don't like". Basically a boolean state. But from the point of view of physical interactions I think of this in terms of "constructive" and "destructive", referring to the self.

Meaning that I personally think the semantics is at first somewhat subjective and system dependent, but the classification of the semantics may still be "semi-universal", in the sense that the evolution implied by the mutual communication implies a "selection" for mutual understanding, and thus an emergent universal semantics.
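
To make the boolean-state idea concrete, here is a toy simulation (entirely my own illustrative sketch; the agent count, signals, and copying rule are arbitrary assumptions). Agents start with random boolean semantics; a mismatched interpretation is "destructive" and gets overwritten; a shared semantics emerges without being innate:

Code:
import random
from collections import Counter

# Toy illustration only; all names and numbers are invented.
SIGNALS = ["a", "b", "c"]  # arbitrary signal inventory

def random_semantics():
    # Each agent maps every signal to a boolean "like"/"not like" meaning.
    return {s: random.choice([True, False]) for s in SIGNALS}

agents = [random_semantics() for _ in range(50)]

for _ in range(5000):
    i, j = random.sample(range(len(agents)), 2)
    s = random.choice(SIGNALS)
    if agents[i][s] != agents[j][s]:
        # A mismatch is "destructive"; one party adopts the other's reading.
        agents[j][s] = agents[i][s]

# Most agents now share one mapping: an emergent, not innate, common semantics.
print(Counter(tuple(sorted(a.items())) for a in agents).most_common(3))

The point is only that convergence needs selection pressure, not built-in meanings.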

/Fredrik
 
Is this related to Pinker's idea of an inborn language module?
 
Fra said:
… this question is tangent to the quest for unification of interactions in physics, in particular when associating interaction ~ communication.

How do two interacting parts learn to "understand each other"?

I don't find the question, applied only to the human level, clear enough to be worth commenting on. But one can generalize the question to ask whether there is a universal semantics that is the basis for how parts of the universe (here I picture the entire range of observers: elementary particles, molecules, cells, simple organisms, and of course humans included) learn to "understand" each other.

Yes, I see your tangent. One could, however, simply suggest that things such as molecules, cells, and simple organisms don't NEED to actually "understand" each other to interact in some predetermined way. For humans, however, we 'understand' each other if the meaning in our heads corresponds to a shared meaning "by the criteria used by a good interpreter", so to speak. In contrast, we wouldn't generally speak of anything from elementary particles to simple organisms understanding each other. If there is no 'meaning' in the mind of a particle, there certainly can't be any semantic representation.

Fra said:
My personal take on that is the evolutionary perspective, where systems that have evolved have been selected for the capability to communicate. Those that fail are not favoured.
But again, why would simply being able to communicate (in the sense of being able to interact) require understanding? If we go back into evolutionary history and suggest the ability to communicate (in the sense that communication is shared meaning) is favoured, then don't you need to claim that such single-cell organisms have this thing we call 'meaning'? I'd rather steer away from this line of thought, as the focus here is 'semantic representation', which is to say there is some kind of meaningful representation (of something real or imaginary) in a person's mind.

Fra said:
I think some basic semantics could be something as simple as good and bad, "I like" or "I don't like". Basically a boolean state. But from the point of view of physical interactions I think of this in terms of "constructive" and "destructive", referring to the self.
This is a great point! I believe you're saying that the "I like" or the "I don't like" is an example of a very basic building block used to create 'meaning' or a semantic representation. I don't disagree, though I might also add that even this 'building block' can be further broken down into semantic representations such as "I", which has a meaning of self, and "like", which could mean various things such as "constructive" as you say, but also "favorable" or something similar. Chomsky might say such very basic building blocks are primitive (i.e., of a fundamental nature), or even that such building blocks are innate. So I think what Chomsky is saying is that ALL semantic representations are innate.

However, one would then need to explain other semantic representations as being innate, such as "bathtub", which one might point out could mean something which can hold water and is large enough for a human body to lie in. I believe Searle actually used this as an example. Regardless, the meaning (or semantic representation, if you will) of "bathtub" might also be looked at as being socially or culturally situated. For example, ask someone who's never heard of a bathtub before what such a thing is, and in order to explain it we will need not just some basic primitives (such as "water" and "capable of holding" and "human body"); we might also need to explain that such things as bathtubs carry social meanings, such as a respect for cleanliness, making it difficult or impossible to reduce such semantic representations to innate primitives.

Fra said:
Meaning that I personally think the semantics is at first somewhat subjective and system dependent, but the classification of the semantics may still be "semi-universal", in the sense that the evolution implied by the mutual communication implies a "selection" for mutual understanding, and thus an emergent universal semantics.
So are you saying that all semantic representations are innate? Sounds like you’re suggesting that in order for mutual communication to occur, there must be universal semantics.
 
I guess what I'm driving at here is: how is meaning created in our heads? If we try to reduce everything to some primitive semantic representation, then where do these primitives come from? And what are they? Certainly, they don't seem to appear in any physics I know of. There is no 'meaning', no semantic representation, that can be pointed at in nature, any more than we can point to emotions such as love or hate. We pick up a book and read a story, and it all has meaning inside our heads. The letters on the page have no meaning except when being read by a person with a mind, in which the meaning of the words somehow arises in the hardware of the brain. Nevertheless, the meaning of the words is still colored by our own experiences, our own social and cultural background.
 
granpa said:
is this related to Pinkers idea of an inborn language module?
I'm not familiar enough with Pinker. Do you have a good link?
 
The closest thing in nature to meaning would be information. So what is the difference between meaning and information?
 
Just look up 'The Language Instinct'.

Computers know 'how' to process information, but they don't know 'what' they are doing. For information to have meaning you must know 'what' you are doing. 'Why' is perhaps optional.
 
granpa said:
The closest thing in nature to meaning would be information. So what is the difference between meaning and information?
Information is simply the symbolic representation of something; for example, the distance from here to there might be 3 miles. Information is available and exists without someone having to be aware of it, just as there is information in a book or in a computer.

Meaning is the semantic representation of something in the mind, such as what we have in our heads when we say it is 3 miles (e.g., "I can't get there in time, it's 3 miles away!" or "It'll only be a minute, we're only 3 miles away."). In each case, we have something in our mind that represents 3 miles, and it is generally contingent on other things.
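
To illustrate the distinction (a trivial sketch of my own; the names and numbers are made up), the "3 miles" by itself is just information, and meaning only appears when an interpreter relates it to his own situation:

Code:
# Toy illustration only. The bare figure is information; it exists in a book
# or a computer whether or not anyone reads it.
distance_miles = 3

def meaning_of_distance(speed_mph, minutes_available):
    # Meaning is contingent on the interpreter's circumstances.
    minutes_needed = distance_miles / speed_mph * 60
    if minutes_needed > minutes_available:
        return "I can't get there in time, it's 3 miles away!"
    return "It'll only be a minute, we're only 3 miles away."

print(meaning_of_distance(speed_mph=3, minutes_available=30))   # on foot
print(meaning_of_distance(speed_mph=60, minutes_available=30))  # driving

Same information, two different meanings, depending entirely on the interpreter.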
 
  • #10
granpa said:
Just look up 'The Language Instinct'.
Thanks. I'll take a look.

granpa said:
Computers know 'how' to process information, but they don't know 'what' they are doing. For information to have meaning you must know 'what' you are doing. 'Why' is perhaps optional.
Right. Notice that when you say "computers know 'how' to process information", you are using a colloquialism. The computer doesn't actually know anything. It isn't even calculating numbers! It isn't doing anything other than having a bunch of interactions which can be interpreted by a human as processing information or calculating numbers. Just as you imply, it is the interpretation of what the computer does, inside a person's mind, that has meaning.

I totally agree "why" is optional.
 
  • #11
I'm going to be unable to access the internet for most of the rest of the week, so I want to avoid entering into detailed discussions; this is a quick response only.

Q_Goest said:
Yes, I see your tangent. One could, however, simply suggest that things such as molecules, cells, and simple organisms don't NEED to actually "understand" each other to interact in some predetermined way. For humans, however, we 'understand' each other if the meaning in our heads corresponds to a shared meaning "by the criteria used by a good interpreter", so to speak. In contrast, we wouldn't generally speak of anything from elementary particles to simple organisms understanding each other. If there is no 'meaning' in the mind of a particle, there certainly can't be any semantic representation.

I am not sure I understand your abstraction here. It seems to me you are somehow referring to the "utility" of understanding? Is there a utility to a particle in understanding its environment? In my view of things there is. The very existence of this "particle" is, in my opinion, possibly a manifestation of success (survival).

Q_Goest said:
But again, why would simply being able to communicate (in the sense of being able to interact) require understanding? If we go back into evolutionary history, and suggest the ability to communicate (in the sense that communication is shared meaning) is favoured, then don’t you need to claim that such single cell organisms have this thing we call ‘meaning’?

Q_Goest said:
This is a great point! I believe you're saying that the "I like" or the "I don't like" is an example of a very basic building block used to create 'meaning' or a semantic representation. I don't disagree, though I might also add that even this 'building block' can be further broken down into semantic representations such as "I", which has a meaning of self, and "like", which could mean various things such as "constructive" as you say, but also "favorable" or something similar.

Yes, that's what I mean. I see your point about further decomposition, but I consider the "I" implicit, and thus redundant (I could have left it out). We could just start with "like" or "not like"; the "I" is implicitly the one expressing this (this is the observer, from my point of natural philosophy).

If we consider that the "like" stuff is self-reinforcing and the "not like" is self-destructive, then we are more likely to see mutual "like" understanding evolve than mutual conflicts, since the conflicts are self-destructive.

Q_Goest said:
So are you saying that all semantic representations are innate? Sounds like you’re suggesting that in order for mutual communication to occur, there must be universal semantics.

I guess your original intention was to keep the discussion at the level of "human philosophy", whereas I am seeing this from the perspective of "natural philosophy", and my way of relating to this is that humans are certainly part of nature.

To clarify what I mean: I do not believe in universal semantics in the fundamental sense, but I do think that there is a selection for a common semantics in any local environment. This suggests that communication is expected to evolve (spontaneously) toward a state of mutual understanding (defined as all parties "liking it"). This corresponds to an equilibrium.

I think of it so that one subject can only probe the semantics of another subject in one way: by interacting with it, and then building up expectations of what the consequences or feedback are. The expectations whose feedback is "I like" are preserved :) If not, the expectation is deformed. A lifeform that survives in an environment can be thought of as "liking the environment" and "being liked by the environment". Understanding might be a communication level where one has "like"-"like".

I can also see this as a kind of consistency from the point of view of logic.

"like" ~ consistency
"not like" ~ inconsistecy

An equilibrium condition requires consistency. Inconsistency suggests that the current situation is unstable, and there would be a selection for fluctuations in the direction of increasing consistency.

I would say this: I don't know if there is a universal semantics (which means exactly what it says; it does not mean that there isn't one, it means I don't _know_ there is one). However, I also know that all my actions are based upon what I do know. Let's call this point of view and behaviour somewhat "rational". Then my best bet is that all other things in my environment are also behaving somewhat rationally. Again, this means what it says (it does not mean my environment IS rational; it means that I ACT, as an act of gambling, as if they are likely rational). The result of this is probably emergent consistency.

This abstraction is, I think, successful in a wide range of cases.

But at some practical level, I do think that actual humans (here I do not consider general cases, like particles) have some "for all practical purposes" innate semantics that boils down to some level of self-preservation – "like" or "not like", "consistency" or "inconsistency" – from which more complex semantics and behaviour can be built.

/Fredrik
 
  • #12
There are a couple of studies that I know of that suggest that our language influences our mental representations. In the Guugu Yimithirr language (of some Australian aboriginals), spatial information is encoded linguistically in absolute terms, such as East and West, and not in terms relative to the body, such as left and right, which is common in Indo-European languages. A researcher named Levinson (1997) wanted to know if the linguistic markers carried over into non-linguistic cognitive tasks. Two groups (Dutch speakers and Guugu Yimithirr speakers) were asked to view a linear arrangement of three objects on a table. The participants were then taken into a second room and seated at a table facing the opposite direction. They were asked to arrange an identical set of objects just as they had been in the first room. Most Dutch speakers arranged them from left to right, relative to the body, but the Guugu Yimithirr speakers arranged them in absolute positions, where they had been with respect to East and West.

There was another study on object similarity done between Yucatec Mayan speakers and English speakers. In English, concrete nouns tend to carry information about object shape (candle suggests something long and thin), but in Yucatec Mayan, the emphasis is on the material the object is made of (candle suggests something made of wax), and the shape of the object has to be added as a modifier. A researcher named Lucy (1992) wanted to know if these differences would carry over into non-linguistic similarity judgments. Lucy presented the participants with a "pivot object" (such as a U-shaped object made of clay) and two objects that the participants had to choose between as being more similar to the pivot. One matched in material (for instance, a clay S-shaped object) and one matched in shape (for instance, a plastic U-shaped object). Almost all English speakers chose based on shape, and almost all Yucatec speakers chose based on material.
 
  • #13
Hi Fra,
I'm sorry, but you confused me, and I don't know what to say... :frown:

Hi Math,
Interesting line of thought. Similarly, studies done on the Piraha (http://news.bbc.co.uk/2/hi/americas/3582794.stm) indicated they had no conceptual meaning for numbers greater than about 3.

Does this mean that our language influences our mental representations? Or does this say that social and cultural customs shape the mental representation directly, and the language simply follows suit like a shadow (or the back legs of a dog as he climbs the stairs <lol>)?

Take the Guugu Yimithirr language as an example. If we found a subset of English-speaking individuals who always thought in terms of north/south/east/west instead of right and left, and we found this difference was due to a common cultural background for this particular group within the English-speaking countries, then wouldn't that indicate that the common culture, and not the language, affected 'meaning'? Truck drivers, from my experience, always talk in terms of north/south/east/west as opposed to right/left, for obvious reasons.

For the case of Yucatec Mayan, is it the language that emphasizes material over structure or is it the culture?

For the case of the Piraha, I seem to remember that some of their children were found to be able to pick up mathematics and larger counting numbers with ease, whereas the elders had a difficult time at it. I guess I see this as another instance of the culture being the real driver for semantic representation as opposed to the language.

I’d be interested in your thoughts.

But to take this one step further (and to narrow down where I wanted to go with this thread), it seems to me that language isn't necessary for meaning. I have to believe animals that have no language can still have some kind of semantic representation within their minds - some kind of meaningful representation that corresponds to the world they experience. So the more fundamental question I'm trying to find an answer to is: is there a scientific or philosophical theory of how meaning is formed within the mind? Something that goes beyond a language used to create meaning. Since all languages are created by humans to describe their experiences, and (in my view) languages are built around the experiences of a given culture, all languages inevitably reflect the experiences of that culture. But meaning is more than just language. There must be something more fundamental than language itself which gives rise to meaning within the human brain. Doesn't meaning have to exist before a language can be created to describe it?
 
  • #14
Q_Goest said:
Hi Math,
Interesting line of thought. Similarly, studies done on the Piraha (http://news.bbc.co.uk/2/hi/americas/3582794.stm) indicated they had no conceptual meaning for numbers greater than about 3.
Thanks for sharing that. I thought I had heard of these people but I was thinking of a society with a similar system, the Gidjingarli (Australian aboriginals, I believe). In their language, they have only four number terms: one, two, one-two, and two-two. They also have a term that means "big mob". I suppose they use as many numbers as they need. It seems odd, though, that they would not develop more precision in the system for trading, even if it is amongst themselves and not with other tribes.
Q_Goest said:
Does this mean that our language influences our mental representations? Or does this say that social and cultural customs shape the mental representation directly, and the language simply follows suit like a shadow (or the back legs of a dog as he climbs the stairs <lol>)?

Take the Guugu Yimithirr language as an example. If we found a subset of English-speaking individuals who always thought in terms of north/south/east/west instead of right and left, and we found this difference was due to a common cultural background for this particular group within the English-speaking countries, then wouldn't that indicate that the common culture, and not the language, affected 'meaning'? Truck drivers, from my experience, always talk in terms of north/south/east/west as opposed to right/left, for obvious reasons.

For the case of Yucatec Mayan, is it the language that emphasizes material over structure or is it the culture?

I've been reading about a fellow named Vygotsky who is considered the grandfather of sociocultural theories of development. His big idea was that social interaction was the causal mechanism for all the higher psychological processes (as he described them, the ones that separate us from animals). At first I thought that was quite boring – of course children learn from social interaction with other people! But he was also saying that our behavior and cognitive development were shaped by the "cultural tools" of our society. He describes these as technical tools (like plows and forks, etc.) and also "psychological tools" that help us with our thinking (like language, maps, calculators, calendars, abacuses, mathematical symbol systems, etc.). His idea was that we internalize these psychological tools. For instance, children who are trained to do math on abacuses make abacus-type errors when doing the problems in their heads because they have integrated the abacus as a mental construct.

His view was that language was a cultural tool, and the most important psychological tool – and back to what you said, this tool should develop to suit the needs of the culture as certainly as a society of soup eaters would develop spoons as technical tools rather than forks.

As far as truck drivers, that would make such an interesting study! I tend to think that force of habit with left and right would override the way of thinking they use on the job (since they learned this system later and only use it in particular contexts), but who knows. I think at the very least you might see a difference in reaction time if there is any interference between strategies.

For the Yucatec Mayan people, it seems to be an open question as to whether they place a special value on an object's material. Presumably they do (or did at some point), and this is why it showed up in the language and why it comes up in similarity judgments. On the other hand, one could ask why shape is such a big deal to English speakers for classification. In any case, it doesn't seem to be something that is explicitly taught. I think that someone who is learning Yucatec as a foreign language would probably get these explicit rules during instruction, but kids just pick it up and develop their mental concepts around it.

Q_Goest said:
For the case of the Piraha, I seem to remember that some of their children were found to be able to pick up mathematics and larger counting numbers with ease, whereas the elders had a difficult time at it. I guess I see this as another instance of the culture being the real driver for semantic representation as opposed to the language.
I’d be interested in your thoughts.
With kids, it could have a lot to do with opportunity and biology. I think the quick pick-up has a lot to do with the overabundance of neural synapses they possess relative to adults. When infants and children are developing, they have these rapid periods of synaptogenesis that are followed by long pruning periods. The number of connections in a toddler's brain is far greater than in an adult brain. Some people argue that this is why young children can pick up foreign languages more easily than adult learners. The wiring is all there, ready and waiting, whereas with adults, that wiring might have gotten dedicated to other functions, or the connections simply "died off" due to disuse and have to be reformed. There also appear to be "sensitive periods" in which stimulation has to be there in order for a specific capability to be acquired. An infant raised in a dark cave for the first year, for example, is never going to develop normal vision.
Q_Goest said:
But to take this one step further (and to narrow down where I wanted to go with this thread), it seems to me that language isn't necessary for meaning. I have to believe animals that have no language can still have some kind of semantic representation within their minds - some kind of meaningful representation that corresponds to the world they experience. So the more fundamental question I'm trying to find an answer to is: is there a scientific or philosophical theory of how meaning is formed within the mind? Something that goes beyond a language used to create meaning. Since all languages are created by humans to describe their experiences, and (in my view) languages are built around the experiences of a given culture, all languages inevitably reflect the experiences of that culture. But meaning is more than just language. There must be something more fundamental than language itself which gives rise to meaning within the human brain. Doesn't meaning have to exist before a language can be created to describe it?


Jean Piaget thought it all started with reflexes. When babies come into the world, they have a few basic reflexes like sucking and grasping. Piaget thought the first month of life was devoted to simple reflex modification. They modify their sucking to get more milk, for example. Simple stimulus and response. As they mature, it gets more interesting – they accidentally raise a hand to the mouth (maybe it flies up there due to a startle reflex) and hmmm... this is something to suck on... it's very satisfying while waiting for mom. As they mature more, they start to notice the effects of their actions on the world – e.g., kicking the side of the crib makes that jingly toy perched on top make more jingles – interesting. This begins rudimentary cause-and-effect learning.

Piaget thought that infants were incapable of actual mental representations until they were about 8 months old. Here's why: he noticed that if he showed his 7 month old an object he would reach for it (he did a lot of experiments on his own kids) but if he covered it up, the baby would stop looking. He reasoned that this means that for the baby, the object ceased to exist. (The infant lacked "object permanence"). This was because he thought the infant could not yet form mental representations – out of sight, out of mind. At 8 months, kids will begin to search for the object – but they make a lot of errors, such as searching for an object in the last place it was hidden instead of a new place.

But there have been a lot of experiments since Piaget's time that suggest infants as young as 4 months old do make mental representations of objects and that infants as young as 3 ½ months have understanding of basic physical laws and possibilities. These were conducted by the "core knowledge" theorists (beginning in the 1990s) who suggest that infants come into the world with certain basic ideas and domain specific knowledge (expectations) that they have inherited through natural selection.

I can talk more about this in another post if you want, but I fear I am becoming boring (and I am sleepy). I think that all the good developmental theorists have models for how the external world becomes internalized as concepts in a child. But certainly, I think most agree that meaning exists before language. Even a pre-verbal child's pointing at what it wants suggests that.
 
  • #15
Math Is Hard said:
There are a couple of studies that I know of that suggest that our language influences our mental representations.

Isn't this the Sapir–Whorf hypothesis (http://en.wikipedia.org/wiki/Sapir%E2%80%93Whorf_hypothesis)?
 
  • #16
CaptainQuasar said:
Isn't this the Sapir–Whorf hypothesis (http://en.wikipedia.org/wiki/Sapir%E2%80%93Whorf_hypothesis)?


Yes. Both studies I mentioned are supportive of the linguistic relativity hypothesis.
 
  • #17
Hi Math. Excellent write-up. Thanks. I have no doubt we're both on the same page; I agree with everything you've said. In fact, the discussion on child development is a perfect lead-in.

I’m changing gears and getting away from the linguistic side of meaning now! Please have a glass of wine and think of this as a late night discussion at a pub as opposed to a debate or well thought through presentation… it ain’t that. If it’s not a fun discussion, I’ve failed. :-p

Children (babies) can have some kind of meaning in their heads. A pain in the tummy might mean hunger and the need for milk, or the sensation of chafing might mean poop and the need for a diaper change. To resolve the need, they can cry, which means that mommy will come to help. The use of the term 'mean' here is some type of thought in their mind which indicates that the experience had by the baby corresponds to something, such as a concept, if in fact babies are capable of concepts. Certainly I can grant a baby the ability to form a concept to some degree, commensurate with its experience level. This isn't to say the baby understands what milk or poop is, but there is an experience of something, the experience is generated by qualia or sensations of the world, and that experience, formed by the qualia, has meaning.

Note that I'm using the term qualia here in a stricter sense than some. People sometimes use the term qualia to mean experience or any part of an experience. I'd like to use the more specific meaning: those specific experiences obtained from the various individual bodily senses. More in a moment.

Back to the baby… which came first for the 4-month-old, the meaning/concept or the qualia-based experience? I don't think it's too radical to suggest that qualia came first. Perhaps Chomsky might suggest that semantic representations are innate (i.e., meanings are innate to the human mind) and thus the concept or meaning was available for discovery prior to the experience of it, but I think that's a bit off. I would, however, concede that there must be something innate in the human mind to allow meanings to form.

What about qualia – perhaps qualia are the innate building blocks that are used to form meaning in the mind? Chalmers, in his book "The Conscious Mind", lists various "experiences", which he categorizes. The list is his, with my thoughts in parens.
1. Visual experiences (qualia of seeing)
2. Auditory experiences (qualia of hearing)
3. Tactile experiences (qualia of touching)
4. Olfactory experiences (qualia of smelling)
5. Taste experiences (combined qualia of smelling and sensors on the tongue, including sweet, sour, bitter, temperature, texture, etc)
6. Hot and Cold (qualia from skin nerve cells)
7. Pain (different qualia from skin nerve cells)
8. Other bodily sensations (other qualia… won’t go there! lol)
9. Mental imagery (formed by all prior listed qualia. Has meaning)
10. Conscious thought (formed by all prior listed qualia. Has meaning)
11. Emotions (emotions are qualia but also have meaning)
12. The sense of self (feelings such as this are qualia and can have meaning)

This list isn’t intended to be all-encompassing, but at least it is representative of the various types of experiences that can be had, per Chalmers.

One through eight above are, or can be, defined as what I'd call 'pure qualia'. They don't need to be 'experiences' in the sense of having meaning by themselves. You might imagine the experience of seeing red, for example, without that experience having any meaning. Similarly, you might hear a high-pitched whining noise (qualia) without it having meaning apart from the experience of the noise. All of experiences 1 through 8 can be had by a 4-month-old baby, for example, before these qualia can be related to some kind of meaning (such as the experience of a fire truck with its siren going as it speeds by, which is created by the experience of red and the high-pitched whining noise, along with some structural representation forming a mental image that has meaning).

For 9 through 12, these are generally constructs within the human mind which use qualia but also equate to something. Mental imagery (such as the memory of a past event, or the combined experience of all qualia creating a present event) generally possesses meaning, as does #10, conscious thought. Emotions such as 'fear' or 'excitement' are sensations of qualia that have meaning, and perhaps these emotions are primitive and evolved prior to any linguistic abilities.

Would it be reasonable to suggest that meaning is created by qualia, and it is the qualia which are the fundamental building blocks of ‘meaning’?

Your discussion about babies is, I think, a perfect example of how babies experience 'pure' qualia, without the ability to relate them to meaning. Take for example what you say here:
Piaget thought that infants were incapable of actual mental representations until they were about 8 months old. Here's why: he noticed that if he showed his 7 month old an object he would reach for it (he did a lot of experiments on his own kids) but if he covered it up, the baby would stop looking. He reasoned that this means that for the baby, the object ceased to exist. (The infant lacked "object permanence"). This was because he thought the infant could not yet form mental representations – out of sight, out of mind. At 8 months, kids will begin to search for the object – but they make a lot of errors, such as searching for an object in the last place it was hidden instead of a new place.
It is only after the sensations of sight, sound, tactile stimulation, etc… all form within a baby's brain and have some end result that meaning can begin to form. Perhaps babies experience pure qualia, and then take those qualia to create these semantic representations (i.e., meaning) within their heads.

What I'm suggesting is that perhaps the brain has this 'equivalence function' which takes qualia and is able to equate them to something, such as prior experiences, or concepts which are similarly created by (and memories about) qualia.

I have one other line of reasoning. If qualia are the innate building blocks, as I'm suggesting, and if these building blocks somehow equate to the total experience to produce meaning through this 'equivalence function' in the brain, then perhaps there are people who, because of genetic/biological differences in their brain wiring, have it backwards. Perhaps for these unique individuals the equivalence function takes the meaning and produces qualia instead.
1. Qualia Experiences => equivalence function => meaning (normal people)
2. Meaning => equivalence function => qualia experiences (unique individuals)
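
Here is a cartoon of what I mean (the 'equivalence function' is my own term, and this toy mapping is purely illustrative, not anyone's actual model):

Code:
# Toy illustration only; the associations below are invented.
# Learned associations running qualia -> meaning (the "normal" direction, #1).
qualia_to_meaning = {
    ("red", "high-pitched siren"): "fire truck going by",
    ("curved glyph on paper",): "the digit 7",
}

# A synesthete's extra association running meaning -> qualia (direction #2).
meaning_to_qualia = {"the digit 7": "yellow photism"}

def experience(qualia):
    meaning = qualia_to_meaning.get(qualia)
    print(qualia, "->", meaning or "pure qualia, no meaning attached yet")
    if meaning in meaning_to_qualia:
        # The meaning itself induces a quale, as in the 5 + 2 = 7 experiment.
        print("  ...which induces:", meaning_to_qualia[meaning])

experience(("curved glyph on paper",))  # meaning, plus an induced photism
experience(("wet", "cold"))             # unlearned bundle: experienced only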

And in fact, this might be the case. For people with synesthesia, perhaps the brain creates meaning from qualia, but also creates qualia from meaning as suggested by #2 above. From an article in Nature:
(note: C. stands for the subject with synesthesia on whom experiments of a devious nature were performed)
"C.'s (the subject with synesthesia) large difference in congruent/incongruent reaction times (236 ms) indicates that automatic photisms were induced by the arithmetic solution (for example, the yellow photism associated with the digit 7 was generated in responce to calculating 5 + 2). This suggests that an external stimulus (for example, a physically present numeral 7) is not required to trigger a photism. Rather, activating the concept of a digit by mental calculation was sufficient to induce a colour experience. Thus, although C.'s photisms are both consistent and automatic, they do not require a physical stimulus to elicit them."
Article attached. Note that a photism is the qualia (ex: the experience of yellow) had by the subject.

About this article has been written:
Other studies have demonstrated that synesthetic perception occurs involuntarily and interferes with ordinary perception. And last summer, University of Waterloo researchers Mike Dixon, PhD, Daniel Smilek, Cera Cudahy and Philip Merikle, PhD, showed that, for one synesthete, the color experiences associated with digits could be induced even if the digits themselves were never presented. These researchers presented a synesthete with simple arithmetic problems such as "5 + 2." Their experiment showed that solving this arithmetic problem activated the concept of 7, leading their synesthete to perceive the color associated with 7.

This finding, published last July in the journal Nature (Vol. 406), was, according to Dixon, the first objective evidence that synesthetic experiences could be elicited by activating only the concepts of digits. As such, these results suggest that, at least for this synesthete, the color experiences were associated with the digit's meaning, not just its form.
Ref: http://www.apa.org/monitor/mar01/synesthesia.html

I had a very long discussion with one synesthete about this. She swore up/down/sideways that her photisms were elicited only by the letterforms. I tried to trick her once by asking what kind of photism she experienced when viewing DIV, but she was too quick for me. She simply said DIV means 'idiot' in her culture (England), so she didn't see anything (she experiences photisms with numbers only). I pointed out that DIV is the number 504 in Roman numerals and she said she simply didn't read Roman… <argh> But I wonder if it's the letterform, as she insisted, or the meaning of a number which forms the photism, as suggested by Dixon!

To summarize, I'm suggesting that perhaps meaning is created in the mind by an 'equivalence function' of sorts. The brain takes raw qualia (such as the perception of colors or grey shades, spatial relationships between these colors in the visual field, auditory perceptions, tactile, olfactory and other qualia, etc.), such as a baby might experience, and using an equivalence function of sorts, it then determines, or learns, how the overall experiences of all these different qualia inter-relate. This equivalence function is what creates meaning out of raw qualia in the brain. Meaning, then, might be thought of as an assemblage of raw qualia that represents something. That representation can also be used by the brain to predict future or past events, for example by creating mental imagery and using that to predict past/future events. I could give some examples but it's getting late.

Have you ever heard of any theory like this? Does it seem unreasonable?
 


  • #18
Hi Q_Goest. Sorry I've been away. I enjoyed your post.

That's an interesting case study on synesthesia. The idea I have about synesthetes is that they don't lose the primary meaning, but also have a piggyback meaning that goes along - and one which can cross sensory modalities.

I'm curious about this equivalence function and how it is different from associative learning (simply connecting paired sensory events). You'll have to help me along because I am not strong in philosophy. I'm thinking in terms of behavioral psychology and Hebb's law - what fires together, wires together. I guess I'm thinking that you have to have these paired (or multiple) events for meaning to arise, and that meaning could not sprout out of pure qualia without association.
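
Roughly what I have in mind, as a toy Hebbian sketch (my own illustration; the units and numbers are arbitrary):

Code:
import numpy as np

# Toy illustration only. Four "units":
# 0 = red flash, 1 = siren sound, 2 = warm skin, 3 = chafing.
n, rate = 4, 0.1
W = np.zeros((n, n))  # association strengths between units

events = [
    np.array([1, 1, 0, 0]),  # red and siren fire together (fire truck)
    np.array([1, 1, 0, 0]),
    np.array([0, 0, 1, 1]),  # warmth and chafing fire together
]

for x in events:
    W += rate * np.outer(x, x)  # Hebb's rule: co-active units get wired together
np.fill_diagonal(W, 0)

print(W)  # red<->siren and warm<->chafing links grew; no cross-links formed

Associations only ever grow between events that actually co-occur, which is why I suspect pairing is necessary for meaning.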

The idea of meaning influencing qualia suggests a sort of transfer, or taking new information and fitting it into an existing "structure". Please let me know if I misunderstood. But it seems that a structure has to be in place first for any qualia to be influenced by it, and wouldn't that structure have to be built first from simple associations of qualia? Maybe this is a chicken and egg thing. I'll take the stance of saying there's no meaning out there in the world; it's purely constructed by the observer.

Another thing I am curious about is how the psychologists' term "sensation" differs from the philosophers' term "qualia". I took a class on "sensation and perception" and was told that sensation was pure experience devoid of meaning. When any meaning is attached, it is then classified as a perception. Are sensation and qualia the same thing?
 
  • #19
Math Is Hard said:
The idea I have about synesthetes is that they don't lose the primary meaning, but also have a piggyback meaning that goes along - and one which can cross sensory modalities.

The present theory of synesthesia, as I understand it, is essentially that there is a 'bleed over' from one portion of the brain into another. From a computational perspective, there might be improper links between neurons in two sections of the brain, one of which handles the experience of color. However, the idea that there is some kind of "piggyback meaning" is a new one as far as I know. For example:
Synesthesia raises all sorts of big questions about how we create an internal representation of the world, but scientists have to break these down into smaller and more manageable ones. For example, is the synesthete's colour response to a number 5, say, triggered by the sight of 5 on the page or just the idea of 5? [is a photism triggered by the letterform or meaning?] Earlier this year, the neurologist Vilayanur Ramachandran of the University of California, San Diego, came up with a test for synesthesia which seemed to suggest it was the sight of the number that was important.

You look at a page made up of specially drawn 2s and 5s that are a mirror image of each other. The 5s are placed at random but the 2s form shapes such as circles or triangles. To a normal person they just look like a jumble, but to a synesthete the patterns made from 2s leap out as a different colour from the 5s. "This shows they were really sensing colour," says Ramachandran. "Concepts don't group."

But then last summer psychologists at Waterloo University came up with evidence that what really matters is the concept of a number.

Smilek and his colleagues gave a synesthete some simple mental arithmetic to do while looking at different coloured sheets. They found that when the colour of the sheet clashed with the colour of the answer, her response was slower than when it was the same. An actual colour could interfere with the colour of a number that existed only in her head.

"Our research suggests that colour experiences coincide with the processing of meaning," says Smilek. "It's the concept of a number that's coloured."

So which is it? At present, we don't know, …
Ref: http://mutuslab.cs.uwindsor.ca/schurko/misc/synesthesia_article_finanical_times.htm

The person I was talking to with synesthesia (I'll call her Alex) has a PhD in a field related to psychology and has practiced for 20 years. Per Alex, the photism occurs due to the letterform, as suggested by Vilayanur Ramachandran. As evidence, when 'five hundred and four' is written out, the photism happens to be the same as for the numeral, but Alex claims this is only coincidental. However, if the number is spoken, the photism is identical to that of the written number. So I think we really can't be sure if the photism corresponds to a meaning or just the letterform. Note that Alex insists it's only the letterform.

Math Is Hard said:
Another thing I am curious about is how the psychologists' term "sensation" differs from the philosophers' term "qualia". I took a class on "sensation and perception" and was told that sensation was pure experience devoid of meaning. When any meaning is attached, it is then classified as a perception. Are sensation and qualia the same thing?

Thanks for that. That's a real nugget! <lol, not being facetious. That's good insider terminology. ty> I like the concept of "sensation" as opposed to qualia. That's at least a very good step in the right direction of defining what I mean by qualia. Let's use "sensation" in the sense that we have 5 senses, corresponding to the first 5 "experiences" listed by Chalmers: 1. sight, 2. sound, 3. touch, 4. smell, & 5. taste.

But what about desire? Lust? Anger? Hatred? Or any other of the myriad 'sensations' we experience which are complete fabrications of the brain, just as those are? What are these? In the list Chalmers provides, #8 is "other bodily sensations", which he lists as an experience and I as qualia. To be honest, I'm open at this point as to what these are. Are they pure qualia? Are they meaning? Are they primitive emotions? I don't know how to relate them to sensations and meaning just yet.

Math Is Hard said:
I'm curious about this equivalence function and how it is different from associative learning (simply connecting paired sensory events). You'll have to help me along because I am not strong in philosophy. I'm thinking in terms of behavioral psychology and Hebb's law - what fires together, wires together. I guess I'm thinking that you have to have these paired (or multiple) events for meaning to arise, and that meaning could not sprout out of pure qualia without association.

I guess you'll have to help me along a bit too, as I've got no formal education in psychology (or philosophy either, for that matter).

To better understand what I'm after, I'll tell you a story about a colloquium I went to recently. The speaker, a fine young man with a French accent, was discussing "concepts" as they apply to cognitive science. At the end of his lecture I introduced myself and my background of 20 years as an engineer, explaining that our views no doubt differed simply due to background (cultural) issues. I then pointed out that as an engineer, I see things as being made of various materials that have various relationships with each other and interact in some way. How these various parts relate to each other and interact is simply a fundamental (perhaps cultural) view I hold of the natural world as an engineer.

I then pointed out that concepts are in the mind, such that if there was no one in the room, there couldn't be any concepts in there either. We got a laugh out of that. As you say, "I'll take the stance of saying there's no meaning out there in the world; it's purely constructed by the observer." The point being, concepts are supervenient* on the functions and physical attributes of the mind, just as a car's motion is a function of the various parts and pieces from which it is made. But such things as concepts (or meaning) are not objectively measurable, only subjectively accessible. I then popped the same basic question to him: "What are concepts made out of?" The best response he could give was, "Other concepts."

Ok, I wasn't expecting him to actually be able to present a theory of how concepts (or any mental representation) are possible within the mind. I do think, however, that there have to be basic building blocks (subjective ones, not objective ones) out of which such things as concepts, meaning, or mental representations are created. Those subjective building blocks must correlate to physical states in the mind. What I want to understand is what these building blocks are, if there are any, and how they interrelate to create what we know as mental representations, meaning, or concepts.

Bit of a tangent now… I find the whole issue of there being nothing intrinsic to nature which can support meaning/concepts/mental representations (such as the present paradigm of mind, computationalism, requires) rather repulsive. As another person at the colloquium pointed out in response to my question, we are really at the "fire, wind, water" point in understanding the mind. We've come no farther along in understanding the mind than the ancients did who tried to explain matter in terms of fire, wind, and water.

I believe I can prove that 'sensations' (1 through 5) are intrinsic to physics, but let's not go there. Let's just make that an axiom for the purposes of this thread (right or wrong, it doesn't matter). I think other sensations such as "other bodily sensations" (#8) are probably also intrinsic to physics, but I don't know that they aren't simply 'meaning'. What's the difference?

If meaning is made up of things intrinsic to physics (ex: sensations) then perhaps meaning is explainable in terms of some kind of ‘equivalence function’**. Note the difference: qualia are intrinsic, but meaning might not be. Meaning might only be a function of something else that is intrinsic.

Now you've pointed out something interesting I'd not heard about. You equated this concept of an 'equivalence function' to "associative learning (simply connecting paired sensory events)". I had to Google that, but yes, I think there's a strong correlation between the concept of associative learning and the equivalence function. However, I don't suspect psychologists really look at the mind the same way I look at interactions in nature – that's mostly a cultural issue. Does associative learning explain how the brain/mind can create mental representations, meaning, or concepts from sensations (qualia) alone? I suspect not, though you came close here: "sensation was pure experience devoid of meaning. When any meaning is attached, it is then classified as a perception." I don't believe there's a really good explanation of how meaning arises in the brain – one as well developed as our explanation of why the planets orbit the sun. I guess the point of this thread is to explore the possibilities of how the mind creates meaning.

*Note: supervenient in this case means that concepts, thoughts, qualia, etc… are dependent on the physical brain, and there is a physical relationship or state that the brain is in which corresponds to an experience. So there must be a 1-to-1 correlation between the physical brain and the seemingly non-physical experience which is had by that brain.

** The term "equivalence function" is my own, not a philosophical term. It is meant only to suggest there is a rigorous, perhaps mathematical, function or relationship used by the brain to take a group of sensations or qualia, create a unified experience of those sensations, and then equate that unified experience to a meaning. This thread is meant to explore how meaning might arise in the brain.
 
  • #20
I've been thinking about what it means to have a "concept". Is it a unique function of the human cerebral cortex?

We were talking about associative learning, and I started thinking about simple organisms that are receptive to this, as shown with Pavlovian "classical conditioning". For instance, aplysia, a simple sea slug, can be conditioned to pair a mild stimulus, like touching its gill, with a severe one, like electric shock, and "learn" that touching is associated with danger (or at least consistently respond to it as it typically does to a dangerous shock). Can we say that this organism has developed a concept? It seems more like Searle's Chinese Room argument - processing without meaning.
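
For what it's worth, the textbook toy model for this kind of conditioning is the Rescorla-Wagner rule. A sketch (illustrative only, with made-up parameters; I'm not claiming this is how aplysia actually implements it):

Code:
# Rescorla-Wagner toy model of classical conditioning; numbers are invented.
alpha, beta, lam = 0.3, 1.0, 1.0  # stimulus salience, learning rate, max association
V = 0.0  # current strength of the touch -> danger association

for trial in range(1, 9):
    V += alpha * beta * (lam - V)  # each gill-touch/shock pairing updates V
    print(f"trial {trial}: association = {V:.3f}")

# V climbs toward 1: the touch alone comes to predict shock, yet nothing in
# this update rule looks like a "concept" - just processing without meaning.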

Some primates, on the other hand, can learn to sign for things that they want, and some African Grey parrots can even use simple grammatical structure. Can we say that they have concepts?

Do you think that associative learning is necessary (but not sufficient) for an equivalence function? I'm interested in hearing your thoughts.
 
  • #21
Pardon me if I'm intruding, but as someone with no formal training in real cognitive stuff, though with a bit of experience in computer simulation and lots of experience as an armchair philosopher, I thought I'd take a crack at coming up with a distinction between a concept and a response to a stimulus.

It seems to me that much of human thinking involves developing an internal mental model of some reference phenomenon in the external world. Then, when we're trying to predict the outcome of interacting with that phenomenon, we examine the mental model of it.

Smelling the aroma of food and consequently anticipating how it will taste if you put it in your mouth seems like a simple stimulus-response behavior to me. Whereas smelling smoke, connecting it to your model of how fire works, and using that to predict what your next action should be (if you're a stone-age hunter, maybe it's a forest fire and you need to gather your possessions and flee; if you're a modern person sitting at home and not cooking, you really need to find the source of the smell and head off a potentially dangerous fire getting out of control; if you're a kid at summer camp, it's time to break out the marshmallows, chocolate, and graham crackers) seems like a more sophisticated cognitive endeavor that we could refer to as having a concept.

So it seems to me that if an animal does the same sort of thing and actively constructs and uses a mental model we could say that animal has a concept. Like if a cat watches a human opening a door and says to itself (probably not so verbosely, of course) "that is a thing which, if manipulated properly, allows me to get to the other side of it" then intentionally goes on to add to the model through experimentation the proper method of manipulating the doorknob to open it, we could say that cat has the concept of a door.
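
In computer-simulation terms (just a sketch of my own framing, with invented names and outcomes), the difference might look like a lookup table versus an internal model that gets consulted before acting:

Code:
# Toy illustration only; all situations, actions, and outcomes are invented.
# Stimulus-response: no model, just a wired reaction.
reflex = {"aroma of food": "salivate"}

# A "concept" of fire: an internal model mapping (situation, action) -> prediction.
fire_model = {
    ("forest", "flee"): "survive",
    ("forest", "stay"): "get burned",
    ("home", "investigate smell"): "catch small fire early",
    ("home", "ignore smell"): "house burns down",
}
GOOD = {"survive", "catch small fire early"}

def act_with_concept(situation, actions):
    # Consult the model and pick the action whose predicted outcome is best.
    return max(actions, key=lambda a: fire_model[(situation, a)] in GOOD)

print(reflex["aroma of food"])                       # pure reflex
print(act_with_concept("forest", ["stay", "flee"]))  # model predicts: flee
print(act_with_concept("home", ["ignore smell", "investigate smell"]))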
 
  • #22
The distinction between a concept and a response to stimulus is that responses only require that one know 'how' to act. Concepts require that you know 'what' you are doing. The next level would be 'why'.

Computers know 'how' to respond to events. Humans know 'what' those responses are doing. Humans may or may not know 'why' they are doing it.
 
  • #23
I think those terms might be too vague going forward as computers become more complex. In many cases computers have very sophisticated models that they use predictively, it's just that they don't actively construct those models in an autonomous fashion (yet). So couldn't you say that a computer has the "why"?

(And also, it seems to me that in the example of a cat and the door I gave, or of a stone age man responding to a forest fire, it's not actually necessary that they know "what" or "why" but they still have the concept of the door and of fire.)
 
  • #24
The concept IS the 'what', and cats do know 'what' they are doing.

In no way at all did I mean to imply that computers couldn't someday know 'what' they are doing. They just don't yet.

Your statement "So couldn't you say that a computer has the 'why'?" does not seem to me to follow from the previous statement "In many cases computers have very sophisticated models that they use predictively, it's just that they don't actively construct those models in an autonomous fashion (yet)". I see no connection.
 
  • #25
Oh, that's just based on my proposal that a concept is a mental model that is actively constructed and added to. But that's just my idea.
 
  • #26
I have no doubt that the brain does what you say, but that is just 'how' it does 'what' it does. If you asked it what it was doing, how would it respond? What's the simulation for that?

perhaps you are thinking, as I originally did, that 'how' represents a subroutine, 'what' represents a goal, and 'why' represents a higher goal. that is not the case. it's more subtle than that.
 
  • #27
Math Is Hard said:
I've been thinking about what it means to have a "concept". Is it a unique function of the human cerebral cortex?

We were talking about associative learning, and I started thinking about simple organisms that are receptive to this, shown with Pavlovian "classical conditioning". For instance, Aplysia, a simple sea slug, can be conditioned to pair a mild stimulus, like touching its gill, with a severe one, like electric shock, and "learn" that touching is associated with danger (or at least consistently respond to it as it typically does to dangerous shock). Can we say that this organism has developed a concept? It seems more like Searle's Chinese Room argument - processing without meaning.
There is a wide variety of views on this. Various people have proposed "single cell" theories of consciousness, primarily Edwards and Sevush. Others suggest that consciousness in some rudimentary form can be a phenomenon of single-cell organisms. See, for example, Hameroff and Margulis.
For example, if you believe that animals are conscious, you have to ask: if your dog is conscious, how about a worm? How about a paramecium? How low do you go? A position taken by the biologist Lynn Margulis is that all cells are conscious, and that even protozoa and bacteria have a simple consciousness.

Single-cell organisms like paramecia are very interesting. They swim around gracefully to seek food, avoid predators, find mates, and have a kind of rudimentary sex. Yet these single-cell paramecia have no synapses or neurons. They do what they do by virtue of their microtubules. The little cilia that stick out and act like sensory organs and paddles or oars are structures made up of microtubules, and are organized by internal microtubules. So, in the case of the paramecium, the cytoskeleton and microtubules are the cell's nervous system.
Ref: http://www.quantumconsciousness.org/interviews/alternative.html

The reason cognitive science generally dismisses these ideas is that the paradigm we work under holds that it is the interaction of the neurons which creates this "emergent" phenomenon of consciousness.

But there is no logical reason that I'm aware of that says you MUST have numerous interacting neurons to support consciousness. There is no philosophical or logical distinction that can be made between the computations which take place among a vast number of neurons and the computations which take place within the bounds of a single cell. To a large extent, I would agree with Edwards, Sevush, Margulis, and possibly Hameroff.
Math Is Hard said:
Some primates, on the other hand, can learn to sign for things that they want, and some African Grey parrots can even use simple grammatical structure. Can we say that they have concepts?

Do you think that associative learning is necessary (but not sufficient) for an equivalence function? I'm interested in hearing your thoughts.
Getting back to your point here… yes, I think some kind of associative learning is necessary for this ‘equivalence function’. I think we are associating mental (also subjective or phenomenal) experiences as listed by Chalmers, and we’re using these experiences to make this predictive model of reality mentioned by Capt Quasar, which to me is what mental representation is all about.
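
As a toy illustration of that kind of associative pairing, here's a crude Rescorla-Wagner style update loop (a sketch of my own, not a model of actual Aplysia physiology): a neutral stimulus gradually comes to predict a significant one through nothing but a prediction-error rule.

Code:
# Toy Rescorla-Wagner style associative learning: a neutral stimulus
# (touch) gradually comes to predict a significant one (shock).
alpha = 0.3      # learning rate
strength = 0.0   # associative strength of touch -> shock

for trial in range(10):
    lambda_ = 1.0  # shock occurs on every trial, paired with touch
    strength += alpha * (lambda_ - strength)  # prediction-error update
    print(f"trial {trial + 1}: associative strength = {strength:.3f}")

# The conditioned response to touch alone scales with 'strength', yet
# nothing in this loop requires any experience or meaning to occur.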
CaptainQuasar said:
It seems to me that much of human thinking involves developing an internal mental model of some reference phenomenon in the external world. Then, when we're trying to predict the outcome of interacting with that phenomenon, we examine the mental model of it.
However, a stimulus and the reaction to it don't require any mental experience to occur. We can't know whether the bacteria are actually experiencing anything. What granpa points out is valid:
granpa said:
the distinction between a concept and a response to stimulus is that responses only require that one know 'how' to act. concepts require that you know 'what' you are doing. the next level would be 'why'.

computers know 'how' to respond to events. humans know 'what' those responses are doing. humans may or may not know 'why' they are doing it.
But I'd go even further and point out that responding to a stimulus is something even a light bulb does when an electric current passes through it. A stimulus can be viewed as any reaction to a causal influence; any physical interaction in nature can be viewed as a stimulus, which is exactly what computationalism is all about. Computationalism simply states that it is the sum total of all these causal interactions, occurring at the classical mechanical level, which creates the emergent phenomenon of consciousness. Just like so many dominoes falling over, the reaction a person has to a given stimulus is believed to be due only to these physical interactions between neurons. Mental representation, however, isn't needed to explain why some physical interaction occurs. The physical interaction occurs because at some local level there is matter and energy which interacts without any meaning whatsoever.*

Meaning, or any mental representation, has to be more than just stimulus. Stevan Harnad (http://users.ecs.soton.ac.uk/harnad/Papers/Harnad/harnad90.sgproblem.html) points out that the computational interactions just described are merely 'symbol manipulation'; we have what he calls a "symbol grounding problem". The interaction of matter never necessitates that meaning accompany that interaction, so stimulus alone isn't sufficient to give rise to meaning. This problem is also pointed out nicely by Searle's "Chinese room" and Ned Block's "China brain" thought experiments. You can have interaction between parts, but there is no reason to suggest, and no criterion given by the computationalist paradigm, to distinguish between a mere action/reaction and a meaningful reaction.
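
To make the 'symbol manipulation' point concrete, consider this trivial sketch (a toy of my own in the spirit of the Chinese room; the rule table is arbitrary). The program produces perfectly appropriate-looking replies by pure lookup, with nothing in it that understands anything:

Code:
# Syntactically adequate responses produced by pure symbol manipulation.
# The rulebook means nothing to the program that applies it.
RULEBOOK = {
    "你好": "你好！",        # "hello" -> "hello!"
    "你饿吗": "我不饿。",    # "are you hungry?" -> "I'm not hungry."
}

def room(symbols):
    # Match input symbols to output symbols; no grounding, no meaning.
    return RULEBOOK.get(symbols, "请再说一遍。")  # "please say that again"

print(room("你好"))  # looks like understanding, but it is only lookup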

*The many problems of mental phenomena influencing physical phenomena are nicely summarized by Jaegwon Kim in his book, "Mind in a Physical World".

~~~

Here’s what I’d suggest as a starting point for defining meaning and mental representation:

Mental representation is created by an equivalence function which associates phenomenal qualities with the external world. Meaning (and concepts) use mental representations as building blocks.

PS: Thanks Capt and granpa for the additional comments and thoughts.
 
  • #28
Q_Goest said:
However, a stimulus and the reaction to it don't require any mental experience to occur. We can't know whether the bacteria are actually experiencing anything.

This seems to me to be a topic beyond the realm of empirical analysis. We don't have any way of knowing that other humans are experiencing anything, do we? Much less bacteria. They could be something like a very complex robot that behaves in a similar manner to ourselves without actually having experiences. (Not saying that they'd be mechatronic, but something like a biological machine with no consciousness.)

That's why it seems to me that a definition of "concept" that brings it within a scope where animals or computers could be demonstrated to have them would be more useful, rather than basing it on unprovable things like experience or consciousness.
 
  • #29
since we are the ones that build and program the computers, why wouldn't we know whether they are programmed to know 'what' they are doing?

I see no reason to think that a clever psychologist couldn't figure out a test to determine whether an animal knows 'what' it is doing or not (as though it weren't obvious). we will someday be able to look at the wiring of an animal's brain and determine whether it does or not. all well within empirical science.
 
  • #30
CaptainQuasar said:
This seems to me to be a topic beyond the realm of empirical analysis. We don't have any way of knowing that other humans are experiencing anything, do we? Much less bacteria. They could be something like a very complex robot that behaves in a similar manner to ourselves without actually having experiences. (Not saying that they'd be mechatronic, but something like a biological machine with no consciousness.)

That's why it seems to me that a definition of "concept" that brings it within a scope where animals or computers could be demonstrated to have them would be more useful, rather than basing it on unprovable things like experience or consciousness.

I see no point in trying to ignore phenomenal consciousness. Certainly, many have. Psychology is basically a science of behavior which might be closer to what you may want to discuss. But that's not the point of this thread, and phenomenal consciousness won't go away simply by ignoring it.

If you don't think the problem of phenomenal consciousness is real, then ask yourself whether any of the experiences you have (qualia) can be mathematically deduced. How do you calculate, for example, the experience of a color? I would argue that there is no mathematical correlate of an experience, even in principle. The reason is fairly simple and obvious: the sensation of color is not something empirically measurable, just as you've alluded to by noting that such a topic is "beyond the realm of empirical analysis". And it's because of this that people like Chalmers and Kim consider themselves dualists.
 
  • #31
Thinking about this a bit more… Our sensations of the world consist of input from our five senses: sight, sound, touch, smell, and taste. Individually, these senses are actually made up of numerous individual senses from specific receptors. For example, the eye has rods and cones, perhaps millions of them, each of which sends signals into the brain, either directly or indirectly by combining signals locally in the eye. So even for a single experience of sight, we find the sensation is made up of an enormous number of individual inputs from the individual rods and cones of the eye.

The next step is for the brain to take all these sensory inputs and create a unified experience from them. This is no small feat. All of these sensations from different sensory organs contribute to a single, unified experience. How the brain combines all these individual sensations into a seamless whole isn't really understood, so this issue has been given the name "the binding problem".

But the binding problem may actually skip a step. We may hear something at the same time we see something, at the same time we feel something. The resulting unified experience, however, also includes what Chalmers lists as "other bodily sensations". I'd propose that these other bodily sensations arise when we associate a meaning with the first five sensory inputs. Take, for example, the sensation of hearing a roar, seeing a lion, and feeling claws cut into your skin. The resulting sensations elicit the "other bodily sensations" of fear and panic. These bodily sensations also contribute to the unified experience we have of the world, but they often (always?) exist because of the sensory inputs which generate some kind of meaning.

In the case of the lion, our mind/brain associates various sensory inputs with a mental model of the world and is able to predict an outcome. That predicted outcome generates the further bodily sensations of fear and panic. So already, within this unified experience, we have an equivalence function of sorts in the brain, which takes the sensory input and equates it to something, producing a bodily sensation. And this bodily sensation is bound up into the entire experience of the world.

This experience of the world, then, is more than a mental representation. We might imagine a mental representation of a lion attacking us without any bodily sensation of fear or panic. I don't think there's anything inconsistent about that.

Let's define the mental representation that we have as part of our unified experience of the world as that part of the experience created by the five senses. For now, let's say it is devoid of other bodily sensations. Certainly, the unified experience we have of the world includes other bodily sensations, but I don't think we need to fold those bodily sensations into the mental representation of the world.

So if we break the experience we have of the world into a mental representation of the world (which arises through the unification of our senses) and the meaning of this mental representation, then the unified experience would seem to contain meaning only after we associate other bodily sensations with it. The 'equivalence function' I keep coming back to, then, is this equating of the mental representation with other bodily sensations, which is what provides meaning.
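
A crude sketch of that equivalence function in code might look like this (toy Python of my own; every name in it is invented): a unified sensory representation is mapped to a predicted outcome, and the prediction elicits a bodily-sensation tag, which on this proposal is where the meaning comes in.

Code:
# Toy equivalence function: sensory representation -> predicted outcome
# -> bodily sensation. Illustrative only.
def predict_outcome(representation):
    # Stands in for the mental model built from the five senses.
    if {"roar", "lion_shape", "claws"} <= representation:
        return "attack"
    return "nothing_of_note"

BODILY_RESPONSE = {"attack": "fear_and_panic", "nothing_of_note": "calm"}

def equivalence_function(representation):
    # Equate the mental representation with a bodily sensation.
    return BODILY_RESPONSE[predict_outcome(representation)]

senses = {"roar", "lion_shape", "claws"}  # unified five-sense input
print(equivalence_function(senses))       # -> "fear_and_panic"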

How does that sound so far? Please feel free to pick it apart. We can talk about “concepts” and other higher mental experiences later.
 
  • #32
I didn't say anything like "the problem of consciousness isn't real". I said that there's no point in objecting that bacteria don't have consciousness if we don't even know that other humans have consciousness.
 
  • #33
granpa said:
since we are the ones that build and program the computers, why wouldn't we know whether they are programmed to know 'what' they are doing?

I see no reason to think that a clever psychologist couldn't figure out a test to determine whether an animal knows 'what' it is doing or not (as though it weren't obvious). we will someday be able to look at the wiring of an animal's brain and determine whether it does or not. all well within empirical science.

To put it in your terminology, I wasn't saying that computers and animals do not know the "what", I question whether humans know the "why" any better than a computer or animal could, or if it's just that we usually have more complex "what"'s.
 
  • #34
CaptainQuasar said:
I didn't say anything like "the problem of consciousness isn't real". I said that there's no point in objecting that bacteria don't have consciousness if we don't even know that other humans have consciousness.

oh - ok. lol

I assume you know what a p-zombie is? It's a hypothetical person with no consciousness: someone who acts as if they're aware of everything, but with no more experience than a rock. Although such a person is logically possible, I don't believe they are naturally possible. If the phenomenon of consciousness is supervenient on my physical brain, then we would be hard pressed to suppose that the phenomenon doesn't also exist for very similar physical substrates.

CaptainQuasar said:
To put it in your terminology, I wasn't saying that computers and animals do not know the "what", I question whether humans know the "why" any better than a computer or animal could, or if it's just that we usually have more complex "what"'s.
Here, it sounds as if you may be mixing up the concept of phenomenal consciousness and behavior. You're using the terms "what" and "why" in the colloquial way.
 
  • #35
Q_Goest said:
I assume you know what a p-zombie is? It's a hypothetical person with no consciousness: someone who acts as if they're aware of everything, but with no more experience than a rock. Although such a person is logically possible, I don't believe they are naturally possible. If the phenomenon of consciousness is supervenient on my physical brain, then we would be hard pressed to suppose that the phenomenon doesn't also exist for very similar physical substrates.

Yes, I know what a p-zombie is, I just wouldn't use obscure, effete terms like that for fear I might sound like a foppish dandy. :biggrin: Not to mention that other participants in the conversation might be unfamiliar with them.

This is exactly what I meant about not being empirical, if you're going to simply assume that you know the presence of consciousness is due to a physical substrate present in humans but not in lower animals.

Q_Goest said:
Here, it sounds as if you may be mixing up the concept of phenomenal consciousness and behavior. You're using the terms "what" and "why" in the colloquial way.

I'm not confusing them, I said I thought a definition of "concept" ought to be something that could be examined empirically. But go ahead, school me in what granpa meant by those terms.
 
  • #36
zombie? I get the idea, but I'm not entirely sure what your point was, so forgive me if I am off. but consider this:

atoms aren't conscious. so if I construct a human being out of atoms, is the result a zombie without consciousness?
 
  • #37
CaptainQuasar said:
To put it in your terminology, I wasn't saying that computers and animals do not know the "what", I question whether humans know the "why" any better than a computer or animal could, or if it's just that we usually have more complex "what"'s.


computers know 'how' to do things
animals know 'what' they are doing
humans know 'why' they are doing it.

a young human who doesn't yet know 'why' is, in a sense, still animal-like: a talking animal, thanks to our language module (read 'The Language Instinct').

if a computer learned to know 'what' it was doing, then it would be an animal.
if an animal learned to know 'why' it was doing 'what' it was doing, then it would be human. and it would be able to speak.

that is my opinion, FWIW.
 
  • #38
granpa said:
zombie? I get the idea, but I'm not entirely sure what your point was, so forgive me if I am off. but consider this:

atoms aren't conscious. so if I construct a human being out of atoms, is the result a zombie without consciousness?

Yeah, that's a very good point. It definitely seems like one of the first questions that the discussion of consciousness leads to.

The consciousness stuff wasn't directed at you at all, I was responding to Q_Goest's points that brought it into the discussion of concepts. I don't think it's really relevant, myself: I think an animal or a computer could have a concept without having consciousness, by actively constructing the mental models that I think are related to concepts.

What I was saying about the "what" versus the "why" is: if the "what" is a mental model used for predictive purposes, what really distinguishes the "why" from it? It seems to me that the "why" is simply an attempt to make an extended and more complicated mental model. An attempt that sometimes appears to be successful, as in much of science, and sometimes unsuccessful, as when people attribute causes to supernatural forces.

Might cats not have some notion similar to supernatural forces out at the ends of the loose threads of their mental models? It seems to me that they might, and that this might qualify as a "why" in the possession of an animal.
 
  • #39
Rather than go off course here, I'd be interested in your thoughts on post #31, where I try to define meaning. I don't mind going off on a tangent for a bit to help explain some basic concepts, but I would really appreciate it if the focus of the thread remained on the idea of meaning and semantic representation.
CaptainQuasar said:
Yes, I know what a p-zombie is, I just wouldn't use obscure, effete terms like that for fear I might sound like a foppish dandy. :biggrin: Not to mention that other participants in the conversation might be unfamiliar with them.
I agree, and I wrestle with the same issue. When does professional terminology get in the way of understanding an issue? However, as difficult as the terms sometimes are to understand, they are intended to aid in conveying a concept, so I try to define them when I don't think they will be easily understood.

CaptainQuasar said:
This is exactly what I meant about not being empirical, if you're going to simply assume that you know the presence of consciousness is due to a physical substrate present in humans but not in lower animals.
There’s something called the “supervenience thesis” which I think we should take as an axiom for the purposes of this thread. That thesis simply states there is a physical substrate that supports the phenomenon of consciousness, and any similar physical substrate should therefore also support consciousness. I don’t mind arguing this point, but let’s do that in another thread if you don’t mind.

CaptainQuasar said:
I'm not confusing them, I said I thought a definition of "concept" ought to be something that could be examined empirically. But go ahead, school me in what granpa meant by those terms.
When you say, "…I wasn't saying that computers and animals do not know the 'what', I question whether humans know the 'why' any better than a computer or animal could, or if it's just that we usually have more complex 'what's," this might be misconstrued to imply that both computers and animals have some kind of experience when they undergo physical changes in state. Computationalism recognizes there are physical changes of state, and it's the function of those changes (i.e., functionalism) which gives rise to the phenomenon.

Computationalism does not say that ALL computers, or any computational physical system for that matter, will possess this phenomenon of consciousness. Note also that computationalism doesn't say that computers are performing mathematical computations. That's not what computers do; they don't do math. They are only interpretable as doing math by a conscious person. Computationalism says that the interactions within some physical system can be simulated or symbolized in some way by using mathematics. There's a big difference here. Again, if there's some misunderstanding about this, perhaps we can start a new thread on what computationalism really means.

What I’d be very interested in is your thoughts on post #31. Any help there would be appreciated.
 
  • #40
we know 'what' a mammal is. it's an animal that has fur, warm blood, live young, legs directly below the body, and many other characteristics. but 'why' is a mammal 'what' a mammal is? why are they so different from reptiles?

this is where the difference is.
 
  • #41
you want to know the meaning of meaning?

what is the meaning of 'mammal'?
 
  • #42
granpa said:
atoms aren't conscious.



I wouldn't go all the way to claiming this was true. Your consciousness resides in your brain, which is made up of atoms. We have no way currently to verify your claim. It seems logical from your POV, but there is a chance that it could be wrong. This "emergent property" label signifies nothing; it's an empty phrase made to fill up the great void in our understanding of consciousness. IMO there are two options that explain consciousness: you either believe in the thing that's not allowed to be talked about on science forums, or you believe in elementary particles that have a mind of their own and are able to construct a universe and wonderful beings. Pure uncaused randomness leading to energy turning into a universe that has existed for 14 billion years, governed by a set of laws of physics of very unknown origin, that could harbour conscious life, is utter nonsense IMO.
 
  • #43
granpa said:
we know 'what' a mammal is. it's an animal that has fur, warm blood, live young, legs directly below the body, and many other characteristics. but 'why' is a mammal 'what' a mammal is? why are they so different from reptiles?

this is where the difference is.

Isn't the reason that a mammal is so different from a reptile that "mammal" and "reptile" are categories that were intentionally created by scientists to sort things with different characteristics into? That one, it seems to me, is definitely tied to semantics, and the "why" of the specific words would be tied to the linguistic history of English. But as for a more fundamental "why", if that's what you're asking, human models for why a variety of animals exist have ranged from the action of some creator god to the modern science of evolution.

But consider a concept that involves, say, a bear looking at a squirrel and being able to recognize "that thing probably came from the trees", or looking at a bird and thinking "that thing probably came from the sky". Is a human having the same concept, plus an origin story involving either gods or scientifically defined processes, really materially different from what the bear thinks?
 
  • #44
CaptainQuasar said:
Isn't the reason that a mammal is so different from a reptile that "mammal" and "reptile" are categories that were intentionally created by scientists to sort things with different characteristics into?


one does not follow from the other. if categories were random creations of people's minds, then one would expect their characteristics to be random. one would expect a continuum of different animals.
 
  • #45
Responding to #31:

Q_Goest said:
This experience of the world, then, is more than a mental representation. We might imagine a mental representation of a lion attacking us without any bodily sensation of fear or panic. I don't think there's anything inconsistent about that.

In my framing this would be the mental model of a lion being used to predict an occurrence involving yourself, or an occurrence not involving yourself. I would call fear or panic additional mental processes that serve to deal with anticipated occurrences involving yourself.

Q_Goest said:
So if we break the experience we have of the world into a mental representation of the world (which arises through the unification of our senses) and the meaning of this mental representation, then the unified experience would seem to contain meaning only after we associate other bodily sensations with it.

This doesn't follow from the other things you've said, in my opinion. Mental representations of the world don't simply arise from a unification of our senses. Someone who is blind or deaf, for example, can have a very similar mental representation of the world, granting the same predictive capabilities, as someone who has all senses functional.

Lots of things feed into these mental representations that aren't simply sensory information. One is "communication", which I put in quotes because I mean pieces of mental models that might come from other people, or something mentally symbolized via input from a computer or another inanimate object (a divination or augury, pigeon guts for instance), or even a communication error: you might develop an idea, an addition to an existing mental representation, because you misheard something someone said.

Other things that would be building blocks of mental representations which don't derive from the senses would be things like logic or mathematics. And of course, as you yourself mention, things like panic or fear that appear to be the product of special mental processes or brain structures.
 
  • #46
granpa said:
one does not follow from the other. if categories were random creations of people's minds, then one would expect their characteristics to be random. one would expect a continuum of different animals.

If you believe in evolution, or at least the fossil evidence it's partially derived from, there is a continuum of different animals; they just aren't all alive today. And there is certainly a continuum of characteristics within a particular species, and there are organisms that don't fit into existing scientific categories. In botany, for example, just a few decades ago they had to tear everything apart and re-categorize it based upon genetic information and evolutionary theory.
 
  • #47
instead of asking what is 'meaning', maybe we should ask what is 'information'?
 
  • #48
Yes, good question. Perhaps another is, "is information a form of communication?" I don't know the answer to that.
 
  • #49
CaptainQuasar said:
If you believe in evolution, or at least the fossil evidence it's partially derived from, there is a continuum of different animals; they just aren't all alive today. And there is certainly a continuum of characteristics within a particular species, and there are organisms that don't fit into existing scientific categories. In botany, for example, just a few decades ago they had to tear everything apart and re-categorize it based upon genetic information and evolutionary theory.


of course there WAS a continuum, but there isn't today. why? obviously many died. why? we aren't talking about traits within a species. we are talking about the differences between different species.

as for recategorizing according to genetics: what if you had a car that was a Ford and another, almost identical, one that was a Chevy? are they not both cars? the fact that they came from two different sources is irrelevant. what about a green car and a blue car? are they not both cars? so classifying animals according to genetics may be useful for biologists, but it isn't really a proper classification.
 
  • #50
granpa said:
instead of asking what is 'meaning', maybe we should ask what is 'information'?


The nature of our existence? The ability to find meaning in sequences of elementary particles arranged in certain ways? How else would we make sense of a world ruled by QFT?
Music is a good illustration of how we perceive information and reality: we are able to extract the patterns of sound waves that make sense to us out of noise (all the possible sound waves).
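
As a loose illustration of pulling a pattern out of noise (a toy numpy example of my own, nothing more): a pure tone buried in random noise can still be recovered as the dominant peak of a Fourier spectrum.

Code:
# Recover a pure tone buried in random noise via a Fourier transform:
# a crude analogue of extracting structure from "all possible sound waves".
import numpy as np

rate = 1000                     # samples per second
t = np.arange(0, 1, 1 / rate)   # one second of samples
signal = np.sin(2 * np.pi * 50 * t)              # a 50 Hz tone
noisy = signal + np.random.normal(0, 2, t.size)  # tone drowned in noise

spectrum = np.abs(np.fft.rfft(noisy))
freqs = np.fft.rfftfreq(t.size, 1 / rate)
print(f"strongest frequency: {freqs[np.argmax(spectrum)]:.0f} Hz")  # ~50 Hz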
 