
Are semantic representations innate?

  1. Sep 26, 2008 #1

    Q_Goest


    In his book "Representation and Reality", Hilary Putnam takes issue with Chomsky's view that there are innate, universal 'semantic representations' in the mind.
    So what do you think? Are there 'semantic representations' in the mind that are innate and universal? Or would you go along with Putnam? If they are not innate/universal, then how do you think meaning gets created in the mind?
     
  3. Sep 27, 2008 #2

    Fra


    Interesting question.

    I haven't read that book, nor do I focus on the human brain, but despite that, this question is tangent to the quest for unification of interactions in physics, in particular when associating interaction ~ communication.

    How do two interacting parts learn to "understand" each other?

    I don't find the question, applied only to the human level, clear enough to be worth commenting on. But one can generalize it and ask whether there is a universal semantics that is the basis for how parts of the universe (here I picture the entire range of observers: elementary particles, molecules, cells, simple organisms, and of course humans) learn to "understand" each other.

    My personal take on that is the evolutionary perspective: systems that have evolved have been selected for the capability to communicate. Those that fail are not favoured.

    I think some basic semantics could be something as simple as good and bad, "I like" or "I don't like" - basically a boolean state. But from the point of view of physical interactions I think of this in terms of "constructive" and "destructive", referring to the self.

    Meaning that I personally think semantics is at first somewhat subjective and system dependent, but the classification may still be "semi-universal", in the sense that the evolution implied by mutual communication implies a "selection" for mutual understanding, and thus an emergent universal semantics.

    /Fredrik
     
  4. Sep 27, 2008 #3
    Is this related to Pinker's idea of an inborn language module?
     
  5. Sep 27, 2008 #4

    Q_Goest


    Yes, I see your tangent. One could, however, simply suggest that things such as molecules, cells and simple organisms don't NEED to actually "understand" each other to interact in some predetermined way. For humans, however, we 'understand' each other if the meaning in our heads corresponds to a shared meaning "by the criteria used by a good interpreter", so to speak. In contrast, we wouldn't generally speak of anything from elementary particles up through simple organisms as understanding each other. If there is no 'meaning' in the mind of a particle, there certainly can't be any semantic representation.

    But again, why would simply being able to communicate (in the sense of being able to interact) require understanding? If we go back into evolutionary history, and suggest the ability to communicate (in the sense that communication is shared meaning) is favoured, then don’t you need to claim that such single cell organisms have this thing we call ‘meaning’? I’d rather steer away from this line of thought as the focus here is ‘semantic representation’ which is to say there is some kind of meaningful representation (of something real or imaginary) in a person’s mind.

    This is a great point! I believe you're saying that the "I like" or the "I don't like" is an example of a very basic building block used to create 'meaning', or a semantic representation. I don't disagree, though I might add that even this 'building block' can be further broken down into semantic representations such as "I", which has a meaning of self, and "like", which could mean various things such as "constructive", as you say, but also "favorable" or something similar. Chomsky might say such very basic building blocks are primitive (ie: of a fundamental nature), or even that such building blocks are innate. So I think what Chomsky is saying is that ALL semantic representations are innate.

    However, one would then need to explain other semantic representations as being innate, such as "bathtub", which, one might point out, could mean something that can hold water and is large enough for a human body to lie in. I believe Searle actually used this as an example. Regardless, the meaning (or semantic representation, if you will) of "bathtub" might also be looked at as being socially or culturally situated. For example, ask someone who's never heard of a bathtub what such a thing is, and in order to explain it we will need not just some basic primitives (such as "water", "capable of holding" and "human body") but perhaps also an explanation that such things as bathtubs carry social meanings, such as a respect for cleanliness, making it difficult or impossible to reduce such semantic representations to innate primitives.
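
    To make that reduction problem a little more concrete, here is a toy sketch in Python (the feature names are my own invention, purely for illustration; this is not anything Chomsky or Searle actually propose) of what treating "bathtub" as a bundle of candidate primitives might look like. The point is that the culturally situated features don't obviously reduce any further:

        # Toy sketch only: a concept treated as a bundle of candidate "primitive"
        # features. All feature names are invented for illustration.
        BATHTUB = {
            "holds_water": True,           # plausibly reducible to simple primitives
            "fits_human_body": True,
            "is_container": True,
            "connotes_cleanliness": True,  # socially/culturally situated
            "found_in_homes": True,        # socially/culturally situated
        }

        def describe(concept, cultural=("connotes_cleanliness", "found_in_homes")):
            """Split a concept's features into candidate primitives and cultural ones."""
            for feature in concept:
                kind = "cultural" if feature in cultural else "candidate primitive"
                print(f"{feature}: {kind}")

        describe(BATHTUB)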

    So are you saying that all semantic representations are innate? Sounds like you’re suggesting that in order for mutual communication to occur, there must be universal semantics.
     
  6. Sep 27, 2008 #5

    Q_Goest


    I guess what I'm driving at here is: how is meaning created in our heads? If we try to reduce everything to some primitive semantic representation, then where do these primitives come from? And what are they? Certainly, they don't seem to appear in any physics I know of. There is no 'meaning', no semantic representation, that can be pointed at in nature, any more than we can point to emotions such as love or hate. We pick up a book and read a story, and it all has meaning inside our heads. The letters on the page have no meaning except when being read by a person with a mind, in which the meaning of the words somehow arises in the hardware of the brain. Nevertheless, the meaning of the words is still colored by our own experiences, our own social and cultural background.
     
  7. Sep 27, 2008 #6

    Q_Goest


    I'm not familiar enough with Pinker. Do you have a good link?
     
  8. Sep 27, 2008 #7
    The closest thing in nature to meaning would be information. So what is the difference between meaning and information?
     
  9. Sep 27, 2008 #8
    Just look up 'The Language Instinct'.

    Computers know 'how' to process information, but they don't know 'what' they are doing. For information to have meaning you must know 'what' you are doing. 'Why' is perhaps optional.
     
  10. Sep 27, 2008 #9

    Q_Goest


    Information is simply the symbolic representation of something; for example, the distance from here to there might be 3 miles. Information is available and exists without anyone having to be aware of it, just as there is information in a book or in a computer.

    Meaning is the semantic representation of something in the mind, such as what we have in our heads when we say it is 3 miles (ex: "I can't get there in time, it's 3 miles away!" or "It'll only be a minute, we're only 3 miles away."). In each case, we have something in our mind that represents 3 miles, and it is generally contingent on other things.
     
  11. Sep 27, 2008 #10

    Q_Goest


    Thanks. I'll take a look.

    Right. Notice that when you say "computers know 'how' to process information" you are using a colloquialism. The computer doesn't actually know anything. It isn't even calculating numbers! It isn't doing anything other than having a bunch of interactions which can be interpreted by a human as processing information or calculating numbers. Just as you imply, it is the interpretation of what the computer does, inside a person's mind, that has meaning.

    I totally agree "why" is optional.
     
  12. Sep 28, 2008 #11

    Fra


    I'm going to be unable to access the internet for most of the rest of the week, so I want to avoid entering into detailed discussions; this is only a quick response.

    I am not sure I understand your abstraction here. It seems to me you are somehow referring to the "utility" of understanding? Is there a utility to a particle in understanding its environment? In my view of things there is. The very existence of this "particle" is, in my opinion, possibly a manifestation of success (survival).

    Yes, that's what I mean. I see your point about further decomposition, but I consider the "I" implicit, and thus redundant (I could have left it out). We could just start with "like" or "not like" - the "I" is implicitly the one expressing this (this is the observer, from my natural-philosophy point of view).

    If we consider that the "like" stuff is self-reinforcing, the "not like" stuff is self-destructive. Thus we are more likely to see mutual "like" understanding evolve than mutual conflicts, since the conflicts are self-destructive.

    I guess your original intention was to keep the discussion at the level of "human philosophy", and I am seeing this from the perspective of "natural philosophy"; my way of relating to this is that humans are certainly part of nature.

    To clarify what I mean, I do not believe in universal semantics in the fundamental sense, but I do think that there is a selection for a common semantics in any local environment. This suggests that communication is expected to evolve (spontaneously) toward a state of mutual understanding (defined as all parties "liking it"). This corresponds to an equilibrium.

    I think of it so that one subject can only probe the semantics of another subject in one way, and that is by interacting with it and then building up expectations of what the consequences or feedback will be. The expectations whose feedback is "I like" are preserved :) If not, the expectation is deformed. A lifeform that survives in an environment can be thought of as "liking the environment" and "being liked by the environment". Understanding might be a communication level where one has "like"-"like".

    I can also see this as a kind of consistency from the point of view of logic.

    "like" ~ consistency
    "not like" ~ inconsistecy

    An equilibrium condition requires consistency. Inconsistency suggests that the current situation is unstable, and there would be a selection for fluctuations in the direction of increasing consistency.
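
    To make this selection picture a bit more concrete, here is a toy sketch in Python (my own invention, not a physical model): each agent carries a boolean "semantic" state, an inconsistent ("not like") pairing is unstable and perturbs one of the two agents, and a consistent ("like") pairing is left alone. Repeated interactions then drift toward a shared convention, which is the kind of emergent, semi-universal semantics I have in mind:

        import random

        # Toy sketch, not a physical model: "like" (agreement) pairings are stable,
        # "not like" (disagreement) pairings perturb one of the two agents.
        random.seed(0)
        agents = [random.choice([True, False]) for _ in range(20)]

        def interact(agents):
            i, j = random.sample(range(len(agents)), 2)
            if agents[i] != agents[j]:            # inconsistent pairing: unstable
                loser = random.choice((i, j))     # perturb one agent at random
                agents[loser] = not agents[loser] # it now agrees with its partner

        for _ in range(500):
            interact(agents)

        print(agents)   # after enough interactions the agents typically all agree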

    I would say this: I don't know if there is a universal semantics (which means exactly what it says; it does not mean that there isn't one, it means I don't _know_ there is one). However, I also know that all my actions are based upon what I do know. Let's call this point of view and behaviour somewhat "rational". Then my best bet is that all other things in my environment are also behaving somewhat rationally. Again, this means what it says (it does not mean my environment IS rational; it means that I ACT, as a gamble, as if they are likely rational). The result of this is probably emergent consistency.

    This abstraction is, I think, applicable to a wide range of cases.

    But at some practical level, I do think that actual humans (here I do not consider general cases, like particles) have some "for all practical purposes" innate semantics that boils down to some level of self-preservation - "like" or "not like", "consistency" or "inconsistency" - from which more complex semantics and behaviour can be built.

    /Fredrik
     
  13. Oct 12, 2008 #12

    Math Is Hard


    There are a couple of studies that I know of that suggest that our language influences our mental representations. In the Guugu Yimithirr language (of some Australian aboriginals), spatial information is encoded linguistically in absolute terms, such as East and West, and not in terms relative to the body, such as left and right, which is common in Indo-European languages. A researcher named Levinson (1997) wanted to know if the linguistic markers carried over into non-linguistic cognitive tasks. Two groups (Dutch speakers and Guugu Yimithirr speakers) were asked to view a linear arrangement of three objects on a table. The participants were then taken into a second room and seated at a table facing the opposite direction. They were asked to arrange an identical set of objects just as they had been in the first room. Most Dutch speakers arranged them from left to right, relative to the body, but the Guugu Yimithirr speakers arranged them in absolute positions, where they had been with respect to East and West.

    There was another study on object similarity done between Yucatec Mayan speakers and English speakers. In English, concrete nouns tend to carry information about object shape (candle suggests something long and thin) but in Yucatec Mayan, the emphasis is on the material the object is made of (candle suggests something made of wax) and the shape of the object has to be added as a modifier. A researcher named Lucy (1992) wanted to know if these differences would carry over into non-linguistic similarity judgments. Lucy presented the participants with a "pivot object" (such as a U-shaped object made of clay) and two objects that the participants had to choose between as being more similar to the pivot. One matched in material (for instance, a clay S-shaped object) and one matched in shape (for instance, a plastic U-shaped object). Almost all English speakers chose based on shape, and almost all Yucatec speakers chose based on material.
     
  14. Oct 12, 2008 #13

    Q_Goest


    Hi Fra,
    I'm sorry, but you confused me, and I don't know what to say... :frown:

    Hi Math,
    Interesting line of thought. Similarly, studies done on the Pirahã in the Amazon indicated they had no conceptual meaning for numbers greater than about three.

    Does this mean that our language influences our mental representation? Or does it say that social and cultural customs shape the mental representation directly, and the language simply follows suit like a shadow (or the back legs of a dog as he climbs the stairs <lol>)?

    Take the Guugu Yimithirr language as an example. If we found a subset of English-speaking individuals who always thought in terms of north/south/east/west instead of right and left, and we found this difference was due to a common cultural background for this particular group within the English-speaking countries, then wouldn't that indicate that the common culture, and not the language, affected 'meaning'? Truck drivers, in my experience, always talk in terms of north/south/east/west as opposed to right/left, for obvious reasons.

    For the case of Yucatec Mayan, is it the language that emphasizes material over structure or is it the culture?

    For the case of the Piraha, I seem to remember that some of their children were found to be able to pick up mathematics and larger counting numbers with ease, whereas the elders had a difficult time at it. I guess I see this as another instance of the culture being the real driver for semantic representation as opposed to the language.

    I’d be interested in your thoughts.

    But to take this one step further (and to narrow down where I wanted to go with this thread), it seems to me that language isn't necessary for meaning. I have to believe animals that have no language can still have some kind of semantic representation within their mind - some kind of meaningful representation that corresponds to the world they experience. So the more fundamental question I'm trying to answer is: is there a scientific or philosophical theory of how meaning is formed within the mind? Something that goes beyond a language used to create meaning. Since languages are created by humans to describe their experiences and (in my view) are built around the experiences of a given culture, all languages inevitably reflect that culture's experiences. But meaning is more than just language. There must be something more fundamental that gives rise to meaning within the human brain than the language itself. Doesn't meaning have to exist before a language can be created to describe it?
     
  15. Oct 15, 2008 #14

    Math Is Hard


    Thanks for sharing that. I thought I had heard of these people but I was thinking of a society with a similar system, the Gidjingarli (Australian aboriginals, I believe). In their language, they have only four number terms: one, two, one-two, and two-two. They also have a term that means "big mob". I suppose they use as many numbers as they need. It seems odd, though, that they would not develop more precision in the system for trading, even if it is amongst themselves and not with other tribes.
    I've been reading about a fellow named Vygotsky who is considered the grandfather of sociocultural theories of development. His big idea was that social interaction was the causal mechanism for all the higher psychological processes (as he described, the ones that separate us from animals). At first I thought that was quite boring – of course children learn from social interaction with other people! But he was also saying that our behavior and cognitive development were shaped by the "cultural tools" of our society. He describes these as technical tools (like plows and forks, etc) and also "psychological tools" that help us with our thinking (like language, maps, calculators, calendars, abacuses, mathematical symbol systems, etc.) His idea was that we internalize these psychological tools. For instance, children who are trained to do math on abacuses make abacus-type errors when doing the problems in their heads because they have integrated the abacus as a mental construct.

    His view was that language was a cultural tool, and the most important psychological tool – and back to what you said, this tool should develop to suit the needs of the culture as certainly as a society of soup eaters would develop spoons as technical tools rather than forks.

    As for truck drivers, that would make such an interesting study! I tend to think that the left-and-right force of habit would override the way of thinking they use on the job (since they learned that system later and only use it in particular contexts), but who knows. I think at the very least you might see a difference in reaction time if there is any interference between strategies.

    For the Yucatec Mayan people, it seems to be an open question whether they place a special value on an object's material. Presumably they do (or did at some point), and this is why it shows up in the language and why it comes up in similarity judgments. On the other hand, they could ask why shape is such a big deal to English speakers for classification. In any case, it doesn't seem to be something that is explicitly taught. I think that someone who is learning Yucatec as a foreign language would probably get these explicit rules during instruction, but kids just pick it up and develop their mental concepts around it.

    With kids, it could have a lot to do with opportunity and biology. I think the quick pick-up has a lot to do with the overabundance of neural synapses they possess relative to adults. When infants and children are developing, they have these rapid periods of synaptogenesis that are followed by long pruning periods. The number of connections in a toddler's brain is far greater than in an adult brain. Some people argue that this is why young children can pick up foreign languages more easily than adult learners. The wiring is all there, ready and waiting, whereas with adults that wiring might have gotten dedicated to other functions, or the connections simply "died off" due to disuse and have to be reformed. There also appear to be "sensitive periods" in which stimulation has to be there in order for a specific capability to be acquired. An infant raised in a dark cave for the first year, for example, is never going to develop normal vision.

    Jean Piaget thought it all started with reflexes. When babies come into the world they have a few basic reflexes like sucking and grasping. Piaget thought the first month of life was devoted to simple reflex modification. They modify their sucking to get more milk, for example. Simple stimulus and response. As they mature, it gets more interesting – they accidentally raise a hand to the mouth (maybe it flies up there due to a startle reflex) and hmmm.. this is something to suck on.. it's very satisfying while waiting for mom. As they mature more, they start to notice the effects of their actions on the world – e.g., kicking the side of the crib makes that jingly toy perched on top make more jingles – interesting. This is the beginning of rudimentary cause-and-effect learning.

    Piaget thought that infants were incapable of actual mental representations until they were about 8 months old. Here's why: he noticed that if he showed his 7-month-old an object, the baby would reach for it (he did a lot of experiments on his own kids), but if he covered it up, the baby would stop looking. He reasoned that this meant that, for the baby, the object had ceased to exist. (The infant lacked "object permanence".) This was because he thought the infant could not yet form mental representations – out of sight, out of mind. At 8 months, kids will begin to search for the object – but they make a lot of errors, such as searching for an object in the last place it was hidden instead of a new place.

    But there have been a lot of experiments since Piaget's time that suggest infants as young as 4 months old do make mental representations of objects and that infants as young as 3 ½ months have understanding of basic physical laws and possibilities. These were conducted by the "core knowledge" theorists (beginning in the 1990s) who suggest that infants come into the world with certain basic ideas and domain specific knowledge (expectations) that they have inherited through natural selection.

    I can talk more about this in another post if you want, but I fear I am becoming boring (and I am sleepy). I think that all the good developmental theorists have models for how the external world becomes internalized as concepts in a child. But certainly, I think most agree that meaning exists before language. Even a pre-verbal child's pointing at what it wants suggests that.
     
  16. Oct 15, 2008 #15
    Isn't this the Sapir–Whorf hypothesis?
     
  17. Oct 15, 2008 #16

    Math Is Hard


    Yes. Both studies I mentioned are supportive of the linguistic relativity hypothesis.
     
  18. Oct 16, 2008 #17

    Q_Goest


    Hi Math. Excellent write-up. Thanks. I have no doubt we're both on the same page; I agree with everything you've said. In fact, the discussion on child development is a perfect lead-in.

    I’m changing gears and getting away from the linguistic side of meaning now! Please have a glass of wine and think of this as a late night discussion at a pub as opposed to a debate or well thought through presentation… it ain’t that. If it’s not a fun discussion, I’ve failed. :tongue:

    Children (babies) can have some kind of meaning in their head. A pain in the tummy might mean hunger and the need for milk, or the sensation of chafing might mean poop and the need for a diaper change. To resolve the need, they can cry, which means that mommy will come to help. The use of the term 'mean' here refers to some type of thought in their mind by which the baby's experience corresponds to something, such as a concept, if in fact babies are capable of concepts. Certainly I can grant a baby the ability to form a concept to some degree, commensurate with their experience level. This isn't to say the baby understands what milk or poop is, but there is an experience of something, the experience is generated by qualia or sensations of the world, and that experience, formed by the qualia, has meaning.

    Note that I'm using the term qualia here in a stricter sense than some. People sometimes use the term qualia to mean experience or any part of an experience. I'd like to use the more specific meaning: those specific experiences obtained from the various individual bodily senses. More in a moment.

    Back to the baby… which came first for the 4-month-old, the meaning/concept or the qualia-based experience? I don't think it's too radical to suggest that the qualia came first. Perhaps Chomsky might suggest that semantic representations are innate (ie: meanings are innate to the human mind) and thus the concept or meaning was available for discovery prior to the experience of it, but I think that's a bit off. I would however concede that there must be something innate in the human mind that allows meanings to form.

    What about qualia – perhaps qualia are the innate building blocks that are used to form meaning in the mind? Chalmers, in his book "The Conscious Mind", lists various "experiences", which he categorizes. The list is his, with my thoughts in parentheses.
    1. Visual experiences (qualia of seeing)
    2. Auditory experiences (qualia of hearing)
    3. Tactile experiences (qualia of touching)
    4. Olfactory experiences (qualia of smelling)
    5. Taste experiences (combined qualia of smelling and sensors on the tongue including sweet, sour, bitter, temperature, texture, etc)
    6. Hot and Cold (qualia from skin nerve cells)
    7. Pain (different qualia from skin nerve cells)
    8. Other bodily sensations (other qualia… won’t go there! lol)
    9. Mental imagery (formed by all prior listed qualia. Has meaning)
    10. Conscious thought (formed by all prior listed qualia. Has meaning)
    11. Emotions (emotions are qualia but also have meaning)
    12. The sense of self (feelings such as this are qualia and can have meaning)

    This list isn’t intended to be all-encompassing, but at least it is representative of the various types of experiences that can be had, per Chalmers.

    One through eight above are, or can be, defined as what I'd call 'pure qualia'. They don't need to be 'experiences' in the sense of having meaning by themselves. You might imagine the experience of seeing red, for example, without that experience having any meaning. Similarly, you might hear a high-pitched whining noise (qualia) without any meaning apart from the experience of the noise. Likewise, all experiences 1 through 8 can be had by a 4-month-old baby, for example, before these qualia can be related to some kind of meaning (such as the experience of a fire truck with its siren going as it speeds by, which is created by the experience of red and the high-pitched whining noise, along with some structural representation, forming a mental image that has meaning).

    Nine through twelve are generally constructs within the human mind which use qualia but also equate to something. Mental imagery (such as the memory of a past event, or the combined experience of all qualia creating a present event) generally possesses meaning, as does #10, conscious thought. Emotions such as 'fear' or 'excitement' are sensations of qualia that have meaning, and perhaps these emotions are primitive and evolved prior to any linguistic abilities.

    Would it be reasonable to suggest that meaning is created by qualia, and it is the qualia which are the fundamental building blocks of ‘meaning’?

    Your discussion about babies is a perfect example, I think, of how babies experience 'pure' qualia without the ability to relate them to meaning. Take, for example, what you said about reflex modification and rudimentary cause-and-effect learning: it is only after the sensations of sight, sound, tactile stimulation, etc… all form within a baby's brain and have some end result that meaning can begin to form. Perhaps babies experience pure qualia, and then they use those qualia to create these semantic representations (ie: meaning) within their head.

    What I'm suggesting is that perhaps the brain has this 'equivalence function' which takes qualia and is able to equate them to something such as prior experiences, or to concepts which are similarly created from qualia and memories of qualia.

    I have one other line of reasoning. If qualia are the innate building blocks as I’m suggesting, and if these building blocks somehow equate to the total experience to produce meaning through this ‘equivalence function’ in the brain, then perhaps there are people who, because of genetic/biological differences in their brain wiring, have it backwards. Perhaps for these unique individuals the equivalence function takes the meaning and produces qualia instead.
    1. Qualia Experiences => equivalence function => meaning (normal people)
    2. Meaning => equivalence function => qualia experiences (unique individuals)

    And in fact, this might be the case. For people with synesthesia, perhaps the brain creates meaning from qualia, but also creates qualia from meaning, as suggested by #2 above. An article from Nature is attached (note: C. stands for the subject with synesthesia on whom experiments of a devious nature were performed). Note that a photism is the quale (ex: the experience of yellow) had by the subject.

    Commentary on this article can be found here:
    Ref: http://www.apa.org/monitor/mar01/synesthesia.html

    I had a very long discussion with one synesthete about this. She swore up, down, and sideways that her photisms were elicited only by the letterforms. I tried to trick her once by asking what kind of photism she experienced when viewing DIV, but she was too quick for me. She simply said DIV means 'idiot' in her culture (England), so she didn't see anything (she experiences photisms with numbers only). I pointed out that DIV is the number 504 in Roman numerals, and she said she simply didn't read Roman… <argh> But I wonder if it's the letterform, as she insisted, or the meaning of a number that forms the photism, as suggested by Dixon!

    To summarize, I'm suggesting that perhaps meaning is created in the mind by an 'equivalence function' of sorts. I'm suggesting that the brain takes raw qualia (ie: such as the perception of colors or grey shades, spatial relationships between these colors in the visual field, auditory perceptions, tactile, olfactory and other qualia, etc.), such as a baby might experience, and using an equivalence function of sorts, the brain then determines, or learns, how the overall experience of all these different qualia inter-relate. This equivalence function is what creates meaning out of raw qualia in the brain. Meaning, then, might be thought of as an assemblage of raw qualia that represents something. That representation can also be used by the brain to predict future or past events – for example, by creating mental imagery and using it to reconstruct past events or predict future ones. I could give some examples but it's getting late.
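
    To show roughly what I mean by this 'equivalence function' (and only roughly; the bundles and the matching rule below are stand-ins I made up, not a claim about how the brain computes), here is a toy sketch in Python. Raw qualia arrive as a bundle of sensory labels, and the function equates the bundle with whichever remembered bundle it most resembles; the label attached to that remembered bundle stands in for the 'meaning':

        # Toy sketch of the hypothetical "equivalence function" (my own term, and
        # the bundles below are invented): a bundle of raw qualia is equated with
        # the most similar remembered bundle, whose label stands in for "meaning".
        memory = {
            ("red", "high_pitch", "fast_motion"): "fire truck passing",
            ("white", "sweet_smell", "warm"):     "milk",
            ("warm", "wet", "splashing"):         "bath",
        }

        def equivalence(qualia):
            """Return the meaning of the remembered bundle sharing the most qualia."""
            overlap = lambda remembered: len(set(qualia) & set(remembered))
            best = max(memory, key=overlap)
            return memory[best] if overlap(best) > 0 else None

        print(equivalence(("red", "high_pitch")))         # -> fire truck passing
        print(equivalence(("warm", "wet", "splashing")))  # -> bath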

    Have you ever heard of any theory like this? Does it seem unreasonable?
     


  19. Oct 25, 2008 #18

    Math Is Hard


    Hi Q_Goest. Sorry I've been away. I enjoyed your post.

    That's an interesting case study on synesthesia. The idea I have about synesthetes is that they don't lose the primary meaning, but also have a piggyback meaning that goes along with it - one which can cross sensory modalities.

    I'm curious about this equivalence function and how it is different from associative learning (simply connecting paired sensory events). You'll have to help me along because I am not strong in philosophy. I'm thinking in terms of behavioral psychology and Hebb's law - what fires together, wires together. I guess I'm thinking that you have to have these paired (or multiple) events for meaning to arise and that meaning could not sprout out of pure qualia without association.
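
    (A crude sketch of what I mean, with made-up numbers rather than anything biologically calibrated: under a Hebbian update, the connection only strengthens when the two events actually fire together, which is why I don't see how meaning could sprout out of a single unpaired quale.)

        # Minimal Hebbian sketch (made-up rate, not a biological model): the weight
        # between two units grows only when both are active at the same time.
        def hebbian_update(w, pre, post, rate=0.1):
            """Hebb's rule: what fires together, wires together."""
            return w + rate * pre * post

        w_paired, w_unpaired = 0.0, 0.0
        for _ in range(50):
            w_paired = hebbian_update(w_paired, pre=1.0, post=1.0)      # co-occurring events
            w_unpaired = hebbian_update(w_unpaired, pre=1.0, post=0.0)  # second event absent

        print(w_paired, w_unpaired)   # 5.0 vs 0.0: only the paired events associate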

    The idea of meaning influencing qualia suggests a sort of transfer, or taking new information and fitting it into an existing "structure". Please let me know if I misunderstood. But it seems that a structure has to be in place first for any qualia to be influenced by it, and wouldn't that structure have to be built first from simple associations of qualia? Maybe this is a chicken and egg thing. I'll take the stance of saying there's no meaning out there in the world; it's purely constructed by the observer.

    Another thing I am curious about is how the psychologists' term "sensation" differs from the philosophers' term "qualia". I took a class on "sensation and perception" and was told that sensation was pure experience devoid of meaning. When any meaning is attached, it is then classified as a perception. Are sensation and qualia the same thing?
     
  20. Oct 26, 2008 #19

    Q_Goest


    The present theory on synesthesia, as I understand it, is essentially that there is a 'bleed over' from one portion of the brain into another. From a computational perspective, there might be improper links between neurons in two sections of the brain, one of which handles the experience of color. However, the idea that there is some kind of "piggyback meaning" is a new one as far as I know. See, for example:
    Ref: http://mutuslab.cs.uwindsor.ca/schurko/misc/synesthesia_article_finanical_times.htm

    The person I was talking to with synesthesia (I'll call her Alex) has a PhD in a field related to psychology and has practiced for 20 years. Per Alex, the photism occurs due to the letterform, as suggested by Vilayanur Ramachandran. As evidence, when 'five hundred and four' is written out, the photism happens to be the same as for the numeral, but Alex claims this is only coincidental. However, if the number is spoken, the photism is identical to that of the written numeral. So I think we really can't be sure whether the photism corresponds to a meaning or just the letterform. Note that Alex insists it's only the letterform.

    Thanks for that. That's a real nugget! <lol, not being facetious. That's good insider terminology. ty> I like the concept of "sensation" as opposed to qualia. That's at least a very good step in the right direction of defining what I mean by qualia. We can take "sensation" in the sense of the five senses, corresponding to the first five "experiences" listed by Chalmers: 1. sight, 2. sound, 3. touch, 4. smell, and 5. taste.

    But what about desire? Lust? Anger? Hatred? Or any other of the myriad of 'sensations' we experience which are complete fabrications of the brain, just as the five senses are? What are these? In the list Chalmers provides, #8 is "other bodily sensations", which he lists as an experience and I list as qualia. To be honest, I'm open at this point as to what these are. Are they pure qualia? Are they meaning? Are they primitive emotions? I don't know how to relate them to sensations and meaning just yet.

    I guess you'll have to help me along a bit too, as I've got no formal education in psychology (or philosophy either, for that matter).

    To better understand what I'm after, I'll tell you a story about a colloquium I went to recently. The speaker, a fine young man with a French accent, was discussing "concepts" as they apply to cognitive science. At the end of his lecture I introduced myself and my background of 20 years as an engineer, explaining that our views no doubt differed simply because of that (cultural) difference in background. I then pointed out that, as an engineer, I see things as being made of various materials that have various relationships with each other and interact in some way. How these different parts relate to each other and interact is simply a fundamental (perhaps cultural) view I hold of the natural world as an engineer.

    I then pointed out that concepts are in the mind, such that if there were no one in the room, there couldn't be any concepts in there either. We got a laugh out of that. As you say, "I'll take the stance of saying there's no meaning out there in the world; it's purely constructed by the observer." Point being, concepts are supervenient* on the functions and physical attributes of the mind, just as a car's motion is a function of the various parts and pieces from which it is made. But such things as concepts (or meaning) are also not objectively measurable, only subjectively accessible. I then popped the same basic question to him: "What are concepts made out of?" The best response he could give was, "Other concepts."

    Ok, I wasn't expecting him to actually be able to present a theory of how concepts (or any mental representation) are possible within the mind. I do think, however, that there have to be basic building blocks (subjective ones, not objective ones) out of which such things as concepts, meaning, or mental representations are created. Correspondingly, those subjective building blocks must correlate to physical states of the brain. What I want to understand is what these building blocks are, if there are any, and how they interrelate to create what we know as mental representations, meaning, or concepts.

    Bit of a tangent now… I find the whole issue of there being nothing intrinsic to nature which can support meaning/concepts/mental representations (such as the present paradigm of mind, computationalism, requires) rather repulsive. As another person at the colloquium pointed out in response to my question, we are really at the "fire, wind, water" point in understanding the mind. We've come no farther along in understanding the mind than the ancients did when they tried to explain matter in terms of fire, wind and water.

    I believe I can prove that 'sensations' (1 through 5) are intrinsic to physics, but let's not go there. Let's just make that an axiom for the purposes of this thread (right or wrong, it doesn't matter). I think other sensations such as "other bodily sensations" (#8) are probably also intrinsic to physics, but I don't know that they aren't simply 'meaning'. What's the difference?

    If meaning is made up of things intrinsic to physics (ex: sensations) then perhaps meaning is explainable in terms of some kind of ‘equivalence function’**. Note the difference: qualia are intrinsic, but meaning might not be. Meaning might only be a function of something else that is intrinsic.

    Now you pointed out something interesting I've not heard about. You equated this concept of 'equivalence function' to "associative learning (simply connecting paired sensory events)". I had to Google that, but yes, I think there's a strong correlation between the concept of associative learning and the equivalence function. However, I don't suspect psychologists really look at the mind in the same way I look at the interactions in nature – that's mostly a cultural issue. Does associative learning explain how the brain/mind can create mental representations, meaning, or concepts from sensations (qualia) alone? I suspect not, though you came close here: "sensation was pure experience devoid of meaning. When any meaning is attached, it is then classified as a perception." I don't believe there's a really good explanation of how meaning arises in the brain – an explanation as well developed as the one for why the planets orbit the sun. I guess the point of this thread is to explore the possibilities of how the mind creates meaning.

    *Note: supervenient in this case means that concepts, thoughts, qualia, etc… are dependent on the physical brain, and there is a physical relationship or state that the brain is in which corresponds to an experience. So there must be a one-to-one correlation between the physical brain and the seemingly non-physical experience had by that brain.

    ** The term "equivalence function" is my own, not a philosophical term. It is meant only to suggest there is a rigorous, perhaps mathematical function or relationship used by the brain to take a group of sensations or qualia, create a unified experience of those sensations, and then equate that unified experience to a meaning. This thread is meant to explore how meaning might arise in the brain.
     
  21. Nov 13, 2008 #20

    Math Is Hard


    I've been thinking about what it means to have a "concept". Is it a unique function of the human cerebral cortex?

    We were talking about associative learning, and I started thinking about simple organisms that are receptive to this, as shown with Pavlovian "classical conditioning". For instance, Aplysia, a simple sea slug, can be conditioned to pair a mild stimulus, like touching its gill, with a severe one, like electric shock, and "learn" that touching is associated with danger (or at least consistently respond to it as it typically does to a dangerous shock). Can we say that this organism has developed a concept? It seems more like Searle's Chinese Room argument - processing without meaning.
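
    (Just to make the pairing concrete, here is a toy simulation using the standard Rescorla-Wagner learning rule; the parameters are arbitrary, and of course it says nothing about whether the slug "means" anything by its response.)

        # Toy Rescorla-Wagner sketch of classical conditioning (arbitrary parameters).
        # V is the associative strength of the conditioned stimulus (gill touch);
        # lam is 1.0 on trials where the shock (unconditioned stimulus) follows.
        def rescorla_wagner(V, lam, alpha_beta=0.2):
            return V + alpha_beta * (lam - V)

        V = 0.0
        for _ in range(20):                # 20 touch+shock pairings
            V = rescorla_wagner(V, lam=1.0)
        print(round(V, 3))                 # close to 1.0: touch now predicts shock

        for _ in range(20):                # extinction: touch with no shock
            V = rescorla_wagner(V, lam=0.0)
        print(round(V, 3))                 # decays back toward 0.0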

    Some primates, on the other hand, can learn to sign for things that they want, and some African Grey parrots can even use simple grammatical structure. Can we say that they have concepts?

    Do you think that associative learning is necessary (but not sufficient) for an equivalence function? I'm interested in hearing your thoughts.
     