Originally posted by Dark Wing
This is what I would like to say: the essence of consciousness lies in physics. Why? It is through these building blocks that biology is formed. Conscious experience, however, can ONLY be reached through biology, as far as we know. Biology allows us to respond to and interact with our world.
But response and interaction with our world are not sufficient conditions for consciousness. See my previous post.
Originally posted by Dark Wing
A nice case to mention here: someone mentioned to me that violet electric light has its own consciousness. Why? They claimed that they could make this electric light do what they wanted it to do: i.e., go against its own normal behaviour patterns and go and seek out all cancerous cells in the body (for instance). Does anyone know anything about this? This might be a point case example of something that is not biology but is conscious.
I find it curious that you entertain this as an example of some nonbiological system that might be conscious, when we could just as well "make something do what we want it to do," i.e., go against its own normal behavior patterns and seek out some particular type of object in the environment, by building a suitable robot. But it seems to me that you refuse to give a robot as much consideration as a candidate for consciousness as you would a bundle of photons.
Originally posted by Dark Wing
Conceiving, as a metaphysical question, that there could be a place in which the brain does not contain consciousness, I still don't see how that is relevant to the study of this contingent world. Sure, maybe it is the case elsewhere. Conception of something is not a good reason to consider it. If we wanted to accept everything we conceived as an argument for the demise of a previous thought, then we would be back in superstition, not science.
Again, the conceivability argument is just another way of reflecting how consciousness is epistemologically and ontologically distinct from 'ordinary' physical phenomena. Simply put, we cannot rationally imagine a world physically identical to ours where H2O molecules do not combine to form water, but we can rationally imagine a world physically identical to ours where the neurons of a human brain do not combine to form consciousness.
The reason this is relevant is that it illustrates a fundamental difference in the way we understand and can explain consciousness vis-à-vis classical physical objects, and this in turn has ontological consequences-- it tells us something about how the world must actually be. Once we accept the axioms of materialism, we can show that H2O molecules form water by logical necessity, but we cannot show an analogous logically necessary link between the physical world as we understand it and consciousness, even in principle. This suggests that the model of the world put forth by materialism is insufficient to account for consciousness. If materialism/physicalism/mechanism were sufficient to explain consciousness, then we should be able to produce an argument showing how consciousness follows from their assumptions by logical necessity. If we cannot theoretically derive consciousness from these theoretical models of reality even in principle, this suggests that if the world really were as these models of reality state it is, then consciousness would not exist. But, of course, consciousness does exist. So these models must be fundamentally inadequate depictions of the world, as they have nothing meaningful to say about consciousness.
For a detailed discussion of the explanatory gap, please see the paper http://www.uni-bielefeld.de/(en)/philosophie/personen/beckermann/broad_ew.pdf, by Ansgar Beckermann. It is a bit of a lengthy read (14 pages), but perhaps after reading it you will come to a fuller appreciation for why the explanatory gap cannot be so easily shaken off. (This paper includes a refutation of the notion that simply equating qualitative properties with physical processes makes for a successful reductive/physical explanation.)
Originally posted by Dark Wing
This is just it. We are not mapping conscious states to brain states, or vice versa. There is no separation, and therefore no mapping can happen. It is a case of "oh, look, we stimulate this neuron, and check out that laughter." It is not mapping, it's understanding. There is no explanatory gap if you conceive that biology is our link to our world, and therefore the reason we even have a conscious experience. What consciousness IS is an entirely different question: but we can explain the experience.
From the 3rd person view, there is no problem: we excite some neurons, we observe laughter. There is a clear causal connection. But that is not the heart of the matter. The heart of the matter is traversing the gap from the 3rd person view to the 1st person view. In your example, we can observe the person's laughter, but we cannot observe his qualitative sense of comedy. We can explain his laughter as observed from the 3rd person view via a functional explanation: the activation of certain neurons leads to the activation of other neurons, and eventually motor neurons are activated which fully account for the characteristic motor behaviors of spastic breathing and smiling facial expression. But this functional 3rd person explanation cannot explain why the person subjectively experienced humor from his 1st person view.
The 3rd person view involves the straightforward causal connection from one structural/functional system (the brain) to other structural/functional systems (respiratory system, facial musculature, etc.). The 3rd-to-1st person view involves a causal connection from a structural/functional system (the brain) to an intrinsic, qualitative system (consciousness). It is obvious how one structural/functional system can causally connect to another structural/functional system, but not obvious at all how a structural/functional system can causally connect to qualitative experience. Under a materialistic framework, there is only correlation, not a straightforward theoretical explanation, between the two. So it is appropriate at this point to speak of 3rd-to-1st person phenomena as a "mapping" instead of merely identifying the two. The explanatory gap persists (again, please see Beckermann's paper).
Originally posted by Dark Wing
Energy and matter cannot be conditioned, nor will they change their behaviour pattern (let's not go down the quantum mechanics line just yet).
Sure, energy and matter can be conditioned. In fact, in principle we can explain a person's behavioral conditioning (response and interaction with his environment) entirely in terms of matter and energy-- that is, in terms of the plasticity of his neurons, and their physical adaptation and rewiring as a function of environmental inputs. Neurons that adapt in this way change their net computational processing, which in turn changes one's behavioral patterns. That is a clear-cut and conceptually complete example of matter being conditioned and changing in response to its environment, and still we have no indication whatsoever of consciousness in our explanatory model.
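To make that concrete, here is a minimal toy sketch in Python (with made-up numbers; not a model of real neurons) of a Hebbian-style plasticity rule. The unit's response to a stimulus strengthens as a purely mechanical consequence of repeated exposure:

```python
# Toy sketch: "conditioning" as purely mechanical weight change (Hebbian-style rule).
# All values are made up for illustration; this is not a model of real neurons.

def response(weights, inputs):
    """The unit's output: a weighted sum of its inputs."""
    return sum(w * x for w, x in zip(weights, inputs))

def hebbian_update(weights, inputs, rate=0.1):
    """Strengthen each weight in proportion to input/output co-activity."""
    out = response(weights, inputs)
    return [w + rate * out * x for w, x in zip(weights, inputs)]

weights = [0.1, 0.1]     # initial, weak stimulus preferences
stimulus = [1.0, 0.0]    # the same "environmental input," presented repeatedly

before = response(weights, stimulus)
for _ in range(20):      # repeated exposure mechanically rewires the unit
    weights = hebbian_update(weights, stimulus)
after = response(weights, stimulus)

print(f"response before conditioning: {before:.2f}")  # ~0.10
print(f"response after conditioning:  {after:.2f}")   # ~0.67, a conditioned response
```

The point is only illustrative: the "conditioning" here is exhaustively described by arithmetic over physical-style state, and nothing in that description so much as mentions experience.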
Originally posted by Dark Wing
As far as we stand in the CR argument, we are the Chinese dude in the room: the one you can't tell from the analogue system. Interesting? The problem of other minds? No doubt. But Malcolm has a great paper about that where he basically turns the problem around to show that we should not worry about whether others are conscious, but about whether we are conscious ourselves. I will find a link to it on the net (I only have it on paper here); I will do that soon and write a more formulated answer to your reply post, hypnagogue. Sorry if this is a little unorganised; I am in a bit of a rush.
I look forward to reading the paper. Still, I think my critique of the CR argument stands. The philosophical thrust of the CR argument applies to human brains just as much as it does to computers, or systems of pipes, or any other physical system.
Say the Chinese Room is the brain of a Chinese person, and the person inside the CR (or CB-- Chinese Brain) is a microscopic demon who is conscious in the same way humans are, but only understands and speaks English. (A bit of a stretch as compared to the traditional CR formulation, I know, but it still serves to illustrate my point.) If the English-speaking demon inside the CB does all the CB's computations for it, the demon will have the CB interpreting Chinese symbols (as encoded in the CB's auditory neurons from external stimuli) and behaviorally responding to them (speaking proper Chinese in a meaningful way with respect to the auditory stimuli), but we have no reason to think that the demon itself will understand Chinese as a result. Conceptually, there will still be syntax (physical processes) but no semantics (awareness of the significance of the syntax) for the conscious agent inside the CR/CB.
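As an aside, the "syntax without semantics" point can be made vivid with a toy sketch (hypothetical rule table and phrases; obviously not a real conversational system). The program below produces fluent-looking Chinese replies by lookup alone, and at no point does anything in it represent what the symbols mean:

```python
# Toy sketch of the Chinese Room's rule-following (hypothetical rule book).
# Input shapes are paired with output shapes; the meaning of the symbols
# appears nowhere in the program -- syntax only, no semantics.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "会，一点点。",     # "Do you speak Chinese?" -> "Yes, a little."
}

def chinese_room(symbols: str) -> str:
    """Return whatever response the rule book dictates for the input shapes."""
    return RULE_BOOK.get(symbols, "请再说一遍。")  # default shape: "Please say that again."

print(chinese_room("你好吗？"))  # a fluent-looking reply, produced with zero comprehension
```

Whether the system as a whole could nonetheless be said to understand, despite the blindness of the lookup, is exactly the question taken up next.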
Of course, we may suppose that although the conscious agent inside the CR/CB will not be aware of the semantics of the Chinese symbols, the CB itself will be aware of the Chinese semantics-- it is a brain, after all, so we should suspect that it will be conscious of the information it is processing as much as we expect any other brain to be conscious of the information it processes. But to accept this position is to accept a critical flaw in the CR argument. If the CB can be aware of Chinese semantics while the English-speaking demon inside it is not, then it could equally well be the case that the analogous phenomenon holds for the traditional CR argument. That is, it could be that although the English-speaking person inside the Chinese Room does not understand the symbols he is manipulating, the CR taken as a system will understand the semantics.