Originally posted by hypnagogue
Whoops, sorry for the late response. I went away for the weekend and forgot to check back in on the archives. It kind of worries me that we're toiling away here in the relative obscurity of the archival section in the first place instead of one of the more viable and visible new philosophy forums, but oh well. Onward!
I was wondering if you'd abandoned the discussion altogether. Glad to see you back.
Although I pretty much agree with this, it's important to remember that it is an assumption that we're making here. In any case, there is a need to explain, since we have an explanatory gap: why are these neural processes conscious in the first place? Saying "they just are" is not much of an explanation.
I think your point about the link between consciousness and evolution also overlooks this question-- you talk about how the brain processes information, but not how certain information-crunching processes somehow become consciousness.
I see. Well, that's actually why I brought up evolution. After all, there are obviously certain reactive processes in "lesser" animals that resemble our consciousness. For example, there's the tendency to dodge "incoming objects". This has, apparently, evolved into the ability to be proactive, so that we can be "on guard" against any potential dangers of that kind. This proaction could then have evolved into an advanced perception of depth, which allows for 3-D imaging, and so on...
So, I didn't mean to dodge with the mention of evolution, but rather to refer you to the particular chapter in Consciousness Explained that deals with it, since I find it pertinent to the discussion.
Also, as to which processes "are conscious" and which "aren't conscious", I think that can be determined by examining which ones are necessary just to continue living (provided no danger or new circumstance presents itself), and which ones serve a greater understanding of the surrounding world (which, of course, leads to greater chances of survival).
For example: the parts which process visual stimuli are a part of consciousness, and so are the parts which contain memory.
To do this we'd need to have a much more sophisticated understanding of how the brain processes information. For instance, even if we accept that Multiple Draft processing really is a factor in (to use as neutral a word as I can think of) establishing consciousness, we still have not addressed the question of the importance of MD in the context of the functioning of the entire brain. IOW, is MD processing sufficient for consciousness? Is it even necessary?
I'd considered this, but, in Dennett's exposition of this theory, he didn't just present what it was; he presented (many) instances where Cartesian approaches fail miserably, and then replaced them with the MD approach, which (obviously) succeeds. The fact that the MD approach implies the question/answer processes within the brain that I described before (and illustrated with the "party game") is what makes it (in Dennett's opinion) rather necessary for an explanation of consciousness.
We also have to take into account the role that physical properties play in the phenomenon of consciousness if we are to have a complete understanding of it in a materialistic paradigm. If we explain consciousness entirely by recourse to abstract information processing (the theory of functionalism), then we are essentially operating in an idealistic paradigm where an abacus or a pile of rocks can be conscious, so long as they perform the right 'computations' over a long enough period of time.
But that is not Idealistic in any way. On the contrary, it is utterly Materialistic, since it completely eliminates "non-physical" parts of consciousness.
By this theory, any PC (for example) that performed the right physical functions would be conscious.
A subjective mental image is not explicable or intelligible in terms of an objective computer 'image,' or to use less obfuscating language, a computer's set of photon outputs. An image on a monitor, in itself, is only a conglomeration of information. In the case of the brain, some conglomerations of information have the added property of being conscious. I know you are leery of subjective/objective distinctions, but they are essential to acknowledge this point. In the spirit of our conversation, we can think of subjective phenomena as being subsets of objective phenomena-- what matters is that we can say for some physical systems that there exists a subjective element (eg human brains) and for others there does not (eg computer monitors).
So you are referring to the processing center as "subjective elements"?
btw, I want to avoid falling into the homunculus trap, so let's clarify further the distinction between the monitor and subjective experience. The monitor is an output system that exists for the benefit of a "viewer". However, no such "viewer" or output system can exist in the human brain (otherwise one gets infinite regress... the homunculus problem). Thus, saying "I have a picture in my mind" or speaking of a continual "narrative" or "display" of consciousness is illogical, since it implies some internal "viewer".
Instead of this, in the MD Theory, memory stimulates the same areas that must be stimulated for conscious experience of external phenomena.
Basically, although I would rephrase it as such: it's not enough to say so-and-so brain functions are correlated with such-and-such conscious awareness, to any level of detail. We must also understand how and why this correlation exists in the first place.
Maybe an even better way to say it would be: to truly understand consciousness, we must construct a theoretical mapping not just from human brain states to human conscious experiences, but from any arbitrary physical system's states to any arbitrary conscious experience.
Sorry if my little slip-up with the word "produces" drew your suspicions.
On the last sentence of the second paragraph (quoted), did you mean arbitrary unconscious experience?
You know, I think that this problem - which you correctly see as vital - may be resolved by determining which parts of the brain participate in the question/answer cycles. After all, there are many parts that just don't do this, and the fact that they don't participate is what makes them "unconscious" processes.
Let me clarify: when I say "participates in the question/answer cycle" I mean the question/answer cycle that begins with external stimulus.
Well, maybe I am misunderstanding you, but it seems that with blindsight we clearly have a contradiction to this Q/A theory. Neural systems involved with visual processing are engaged in a "question/answer" game insofar as they collect input from the external world, meaningfully process it, and share this processed information meaningfully with further neural systems which go on to guide behavior back out in the external world (eg information is meaningfully shared with the motor cortex to guide a hand to reach out and pick up a cup, or with Broca's area to answer a question about the environment). In spite of all this, the Q/A activity of these visual neural systems with other neural systems does not have any attendant conscious visual awareness for those 'blind' portions of blindsight.
But it did participate with memory, and that's what's really important. The fact that these people are impaired in their visual centers' ability to "question" the memory, in certain instances, should be what accounts for "blindsight".
g2g, sorry. Time's up, but I think there is more to be said.