hypnagogue said:
What if 'unga' means 'I see this color' (or if you prefer, 'there is this color')? Then clearly it can have a truth value, despite it being the only word in my language.
If your language only had that one word, could you think about other things? For instance, could you think about not-unga? And if you can think about not-unga, can you invent a word for it? If you can come up with a new word, that means you already have the concept in your mind. When I'm referring to language here, I'm referring to the totality of concepts you have in your mind, not the totality of arbitrary symbols which may or may not exist as expressions of those concepts.
What of a child who learns his first word? His father points to his mother and says "momma," and eventually the child learns to refer to his mother as "momma" himself.
The child may only know one word, but his/her head must already be full of concepts before the first word is learned. It's one thing to know that 'momma' is the sound that goes together with a particular concept; it's another thing to become aware of the concept in the first place. I'm talking about the latter, not the former.
Let me use a notation to make things easier: I will append a '+' sign whenever I'm talking about a concept a word refers to, and '-' when I'm talking about the word itself (eg: mother-, mère-, madre-, and Mutter- are different words in different languages for the concept mother+).
The child knows no other words, so there are no other words for his "momma" to achieve meaning from, and yet clearly the word "momma" now has meaning for the child. How can this be if the meaning of the word "momma" is strictly contingent upon other words?
The meaning of momma- is momma+. The meaning of momma+ is contingent upon concepts such as object+, room+, person+, face+, eyes+, and so on. Even though it may take years for the child to learn the words object-, room-, person-, face-, eyes-, those concepts must be in place from a very early age.
Another scenario: before a hypothesized experimental result is determined empirically, what determines the truth value of the hypothesis?
Semantics.
When does it attain a truth value, when the experimenters observe that it has been verified (or falsified), or when the experimenters think internally/speak/write about the empirical results?
That depends. The experimenter learns something by observing the experiment, and that knowledge becomes true to him as concepts (eg: this+ causes+ that+). But concepts as such cannot be communicated, so the experimenter must choose some words in his vocabulary, and create a relationship between the words that mirrors the relationship between the concepts in his mind. And here is where semantics rears its ugly head: how can the experimenter choose words that perfectly recreate the concept "this+ causes+ that+" in the mind of everyone else?
I'm not necessarily making claims about the connections between language and reality. What I am making claims about is the connection between language and perceptual experience.
The connection may be clear for the speaker, but for the listener/reader it must be reconstructed. It's one thing to explain what momma- means by pointing your finger at momma+. It's quite another thing to explain what "consciousness- is- an- epiphenomenon- of- the- brain-" means; it's really difficult for anyone to figure out what concepts a person has in mind when uttering that sentence. However, no one is born a speaker, which means our knowledge of what words mean is always imperfect. Which means not everything we learn from other people is true, in the sense that it would be true if we had learned it from personal experience.
Suppose there is a 5 year old child, A, who has seen and can perceptually distinguish between cats and dogs, but suppose that his limited vocabulary only allows him to make the crudest of linguistic distinctions regarding what makes a dog a dog...
To cut a long story short, learning about cats+ and dogs+ is not the same thing as learning about cats- and dogs-. If you know nothing about cats- you can still think about cats+. If you know about cats- but do not know about cats+, then you may be tempted to think cats- is just another word for something you already know (such as dogs+). You may, in fact, enter into a long philosophical discussion as to whether dogs- really exist, since every dog- can be shown to be a cat+ (which is of course nonsense if you know that dogs+ are not cats+).
They crack them by finding systematic relationships between the code and a natural language. But such schemes are made much easier due to the fact that symbols in a code stand for letters in an alphabet.
I'm sorry but you're wrong on this. Those forms of encryption (letter substitution) are no longer used since, as you said, they are so easy to crack. What makes cracking codes possible is that people usually know what a coded message probably means - there aren't many things one can talk about during war. But this is a side issue anyway.
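To make the cipher point concrete, here is a minimal Python sketch (my own toy example, not anything from the thread) of why letter substitution is so weak: a Caesar shift preserves the statistical shape of the plaintext, so comparing the ciphertext's letter frequencies against typical English frequencies recovers the key without knowing anything about the message's topic.

```python
# Toy demonstration: cracking a Caesar (letter-substitution) cipher
# by frequency analysis alone. The substitution maps each letter to
# another letter, so letter statistics survive encryption intact.
from collections import Counter

# Approximate relative frequencies (%) of letters in English text.
ENGLISH_FREQ = {
    'e': 12.7, 't': 9.1, 'a': 8.2, 'o': 7.5, 'i': 7.0, 'n': 6.7,
    's': 6.3, 'h': 6.1, 'r': 6.0, 'd': 4.3, 'l': 4.0, 'c': 2.8,
    'u': 2.8, 'm': 2.4, 'w': 2.4, 'f': 2.2, 'g': 2.0, 'y': 2.0,
    'p': 1.9, 'b': 1.5, 'v': 1.0, 'k': 0.8, 'j': 0.15, 'x': 0.15,
    'q': 0.10, 'z': 0.07,
}

def shift(text: str, key: int) -> str:
    """Shift each lowercase letter by `key` positions (mod 26)."""
    return ''.join(
        chr((ord(c) - ord('a') + key) % 26 + ord('a')) if c.islower() else c
        for c in text
    )

def score(text: str) -> float:
    """Chi-squared distance between text's letter frequencies and English."""
    letters = [c for c in text if c.islower()]
    counts = Counter(letters)
    total = len(letters)
    return sum(
        (counts.get(ch, 0) / total * 100 - freq) ** 2 / freq
        for ch, freq in ENGLISH_FREQ.items()
    )

def crack(ciphertext: str) -> tuple[int, str]:
    """Try all 26 keys; the decryption that looks most like English wins."""
    best = min(range(26), key=lambda k: score(shift(ciphertext, -k)))
    return best, shift(ciphertext, -best)

plaintext = "attack the eastern gate at dawn and retreat before the sun rises"
ciphertext = shift(plaintext, 7)   # encrypt with key 7
key, recovered = crack(ciphertext)
print(key, recovered)
```

Note that the cracker never needed to guess the message's subject matter; the systematic symbol-to-letter mapping alone gave it away, which is why modern ciphers avoid any such fixed correspondence.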
Even putting that objection aside-- to borrow from your example, how would the interpreter, going only by syntax, differentiate between the words for 'left' and 'right'? Even if he manages to narrow things down enough such that he knows one word must mean 'left' and the other 'right,' how is he to differentiate between these without ultimately making some inference grounded in facts about the external world? For instance, if he finds that one word refers to the dominant hand of most people in China, he may conclude that this word means 'right,' but this inference is drawn via reference to an externally existing fact about Chinese people; or he may find that one word means 'left' by roundabout reference to the direction in which the sun sets, but this again relies on an empirical fact. (e.g., if the text of some human-like alien civilization fell to Earth tomorrow, we would not know which of their hands tends to be dominant, nor would we know in which direction their sun sets, and so we could not make sense of any of these.)
Even though my example was trying to address something different, I will comment on that as it touches on the same issue. The issue is what I referred to as symmetries. There is a symmetry between 'left' and 'right' that prevents you from knowing what other people mean by it, except that if something is on the right then it can't be on the left. That's all you know about left and right; for all you know your left+ might be my right+ and we'd still agree that most people prefer to use their right- hand. So the meaning of right- is not right+, it's something else close to "not left-". But of course there's more, because there are things that are neither right- nor left-. Even so, things that are neither right- nor left- tell you very little about what right+ and left+ could possibly be.
In the end, we can only discover what right- and left- mean to the extent that we can perceive asymmetries. And this has three very important consequences:
- the entirety of our perceptions, taken as a whole, cannot possibly exhibit any kind of asymmetry (there is nothing outside it to contrast it with)
- as such, any description of our perceptions that implies asymmetry (eg: mind vs. body) is an artificial construct
- since descriptions are made of abstract symbols, the dichotomy between the description of our perceptions and the perceptions themselves must have been introduced by the symbols, not by our perceptions themselves
I'm not sure exactly how language, as expressed by symbols, creates this false dichotomy, but I'm sure that it does. The reason I'm so sure is that there is no dichotomy between any aspect of my perceptions and the entirety of them; in other words, I never experience anything that I believe I should not be experiencing. Clearly it is our theories that must be wrong, not our perceptions.