Yeah. Well, check out the first paragraphs of:
Perhaps I should talk about "general semantics" to cause less confusion, but I keep falling back to just "semantics" because language is so clearly just an instance of semantical understanding. Language is semantical because our understanding is semantical. And yes, just being conscious of holding a coin in your hand or seeing an apple in your visual field is a case of semantical understanding. I'll try to break it down, and I hope I can demonstrate why it is important for a physicist to understand this.
Sensory data causes spatial/temporal patterns to occur inside the cortex. This is where different objects are recognized. But what does it mean to recognize an apple? Does there exist a metaphysical "apple neuron" whose firing means an apple has been perceived? Of course not. You must first have assumed that such things as "apples" exist in the world. The brain must form a worldview: assumptions about "what exists" and assumptions about how those things behave.
How does this learning happen, then? I could say the brain classifies objects by their properties, but that would already be wrong, in that it would imply there really exist metaphysical objects to be classified. No; the brain only finds stable patterns and assumes there are objects. We do not point at an arbitrary portion of a wall and call it an object; we classify reality into, let's say, "sensible objects" (like what rewebster said about labeling things in biology).
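That learning step can be sketched minimally. Nothing below is from the original post; the pattern labels and the stability threshold are illustrative assumptions. The only point is that "objecthood" is conferred purely by recurrence, not by any prior knowledge of what the patterns are.

```python
from collections import Counter

def learn_objects(observations, stability_threshold=3):
    """Promote recurring sensory patterns to assumed 'objects'.

    The system never learns what the patterns 'really are';
    stability alone earns a pattern a place in the worldview.
    """
    counts = Counter(observations)
    return {pattern for pattern, n in counts.items() if n >= stability_threshold}

# A stream of raw sensory snapshots (hypothetical labels):
stream = ["blob-A", "blob-B", "blob-A", "noise-1",
          "blob-A", "blob-B", "blob-B", "noise-2"]

# The two recurring patterns are promoted; the one-off "noise" is not.
print(learn_objects(stream))
```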
All this labeling and classification of the sensory data occurs because that is the only way to find "things" that exhibit persistent behaviour, and when we find such things, we can make predictions. The brain can basically simulate reality. It perceives some situation, it recognizes objects (a rolling rock, gushing water, fog in the wind...), and it can form some idea about what is going to happen. (A rock coming down on you will harm you, water might too, but fog won't, unless it happens to be nerve gas, etc...)
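As a toy illustration of that perceive-recognize-predict loop (my own sketch, not the author's model; the object names and outcomes are made up), the "worldview" can be nothing more than remembered outcomes per assumed object:

```python
from collections import defaultdict, Counter

class Simulator:
    """A toy 'worldview' that predicts by recalling past outcomes."""

    def __init__(self):
        self.memory = defaultdict(Counter)  # assumed object -> outcome counts

    def experience(self, obj, outcome):
        """Record what followed when this object was recognized."""
        self.memory[obj][outcome] += 1

    def predict(self, obj):
        """Return the most frequently observed outcome, or None if unseen."""
        outcomes = self.memory[obj]
        return outcomes.most_common(1)[0][0] if outcomes else None

sim = Simulator()
sim.experience("rolling rock", "harm")
sim.experience("rolling rock", "harm")
sim.experience("fog", "no harm")

print(sim.predict("rolling rock"))  # harm
print(sim.predict("fog"))           # no harm
```

The prediction runs ahead of reality precisely because it is only a lookup over stable past patterns, not an inspection of reality itself.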
At no point in this learning did we have any actual "knowledge" about reality at our disposal. How could we ever have learned that there exists "ground" when we did not know what any of the sensory data meant at all, when we did not know what was "visual" data, what was "audio" and what was "tactile"? Well, at root, the worldview is a self-supporting circle of truth. All ontological arguments are circular. All views of reality are sets of self-supporting assumptions. Not only that, all things we could ever perceive and form any semantical ideas of were patterns that were distinguished by their differences from each other. Air could not be perceived if everything were air.
There must also be an idea of "ground" in order to give any meaning to "air".
This is why all our conscious understanding is semantical. Nothing has any metaphysical meaning in itself, and nothing we are conscious of is "reality". It is merely something we have a semantical idea of, and the idea itself exists only by virtue of other ideas giving it some meaning. We do not even have any visual experience without a semantical worldview against which to interpret the data.
This is imperative to understand, so let us imagine a world where everything is completely red. If everything were red, we would have no comprehension of colours at all. The red world would not look the way "red" looks to us now. We would not be conscious of any "red" things. The way we experience red now is arbitrary. Reality does not look the way we see its colours; there just exist different wavelengths of light. We could not consciously perceive red without being able to tell what is not red, i.e. without other colours existing.
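The all-red thought experiment has a direct information-theoretic reading (my gloss, not the original author's): a signal with no variation carries zero information, so there is nothing in it to distinguish, hence nothing to perceive. A minimal sketch:

```python
from collections import Counter
from math import log2

def entropy(signal):
    """Shannon entropy, in bits, of a sequence of symbols."""
    counts = Counter(signal)
    n = len(signal)
    return sum(-(c / n) * log2(c / n) for c in counts.values())

all_red = ["red"] * 100                           # one symbol, no contrast
varied  = ["red", "green", "blue", "yellow"] * 25  # four equally likely symbols

print(entropy(all_red))  # 0.0 -- nothing distinguishable, nothing perceivable
print(entropy(varied))   # 2.0 -- contrast is what carries information
```

The uniform world scores exactly zero bits: "red" only becomes a perceivable property once not-red alternatives exist in the signal.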
So, just being conscious of an apple moving across your visual field already requires a good amount of semantical interpretation of the data. When the apple moves, a corresponding pattern moves across the cortex, and only at the upper levels is it recognized as an apple, and only by virtue of some semantical assumptions about reality can it be assumed to be the "same apple" from one moment to the next.
And because we understand reality in terms of sensible objects, we tend to make the error of identity. It is useful to assume the apple really is the same from one moment to the next, even though this is a completely arbitrary assumption. We assume the apple has a metaphysical identity, and we have an experience of our "self" having an identity (which is caused by certain semantical assumptions as well). As we break reality down into smaller components, we find that things like shadows, rainbows and tornadoes do not really have identity; they are just stable shapes which consist of different "stuff" from one moment to the next. But we still make the same error and ask "so what are the fundamental things then?" We assumed reality was made of "water, ground, air and fire" because we saw it fit to assume things really have identity in themselves.
The answer I can give is that there never was and never will be any fundamental "things". Reality consists of self-organization: there come to exist stable patterns which interact and form new stable patterns in an emergent sense, and at some point along that road there comes to exist the stable pattern that is our semantical prediction process, which is stable in an evolutionary sense BECAUSE it can actually predict reality before reality itself catches up.
Many materialists fall into this fallacy of identity as well when they posit that our identity is the matter we are made of. Here we come to the mind-blowing parts of the nature of our "self".
I said objects are recognized in the cortex. Many people naively imagine that we are the cortex, then; whatever happens to the cortex is what we experience. This is almost right, but in some important ways not quite. The cortex is not a metaphysical object either. It does not have identity in itself. It is just a stable system in which, in some arbitrary sense, we can say so-called "object recognition" occurs, but it is not the cortex having a conscious experience, for the cortex is a "colony of things", like an ant colony. How does a colony conceive itself as one and have just one subjective experience? Why doesn't every neuron have a subjective experience of "reality hitting it"? Or if they do, which one is our "self"?
Note that a single neuron does not build a worldview in any sense; it is the whole colony that does. A single neuron doesn't have a model of reality in it; the whole colony does (in some arbitrary form). And this is an important hint: the colony has, by its structure, made an assumption that there does exist such a thing as "self". This "self" is entirely a semantical token in the worldview, which consists of nothing but "assumptions". The colony conceives itself as one.
There are many neurons, but only one worldview, only one interpretation process, and consequently only one subjective experience.
(It is possible, and there are cases, that multiple worldviews basically exist inside one brain, which is the same as many "minds" inside one brain; in some cases both are active at the same time and are capable of independent attention, as in the case of Kim Peek.)
Furthermore, here I claim that for conscious experience to occur, the "learning system" must make the semantical assumption that there exists such a thing as "self", and consequently interpret sensory data in the semantical form of "I see an apple".
We never were and never will be aware of reality hitting our cortex directly, for it is not reality we are conscious of; it is the semantical objects and ideas in the worldview instead!
Think about that.
But it is important to notice, as you probably have many times while reading this post, that when I use semantical concepts to describe the above, I am moving in circles, and I cannot actually touch the "true reality" of what happens. No matter in which form you imagine the worldview to really exist, you are at all times merely imagining a bunch of SEMANTICAL THINGS that exist in different configurations. This can be useful, but it is not reality.
This is why I can never exhaustively convince anyone that the above is true. If what I say is true, I can never convince you of it! Ain't that a bummer... I can only say there exists an overwhelming amount of indications pointing towards the above, and it basically solves the hard problem of consciousness as well as it can ever be solved. A semantical idea of the system that causes those semantical ideas is never true to the reality of the system itself.
But I've already said enough. If you are interested in the philosophical side of it, try:
(I still haven't had a chance to read this myself, but I intend to, and judging from the first pages it is pretty spot on.)
Or for the mechanical side of prediction processes, try:
(This one I have read, and what is described could well be what gives rise to the kind of system that can produce semantics and semantical predictions the way I described.)
Phew... That was long, but now I hope I have put everything I have stated before into a rather coherent whole, and I can just refer everyone to this post when I make confusing statements about semantics elsewhere ;)