Heterophenomenology definition in philosophy

AI Thread Summary
Dennett's heterophenomenology is defended as a comprehensive methodology for studying consciousness, asserting that no opposing philosopher has proposed an experiment that cannot be conducted within its framework. Critics argue that while heterophenomenology interprets behavior and subjective reports, it may not fully account for the essence of consciousness itself, as it treats beliefs and experiences as abstractions rather than acknowledging their intrinsic qualities. The discussion touches on the limitations of third-person methods in addressing subjective experiences, raising questions about the validity of first-person scientific methods. The debate also highlights the tension between Dennett's eliminativist stance and the antiphysicalist perspective, which emphasizes the significance of inner experiences. Ultimately, the conversation reflects ongoing philosophical challenges in reconciling subjective consciousness with objective scientific inquiry, suggesting that while heterophenomenology offers valuable insights, it may not provide a complete understanding of consciousness.
  • #51
Ultimately it should be the experience we are trying to explain. However, as you say in the previous para., we cannot study experiences unless they are our own. The 'other minds' problem shows this clearly. Heterophenomenology therefore cannot study experiences, only reports and beliefs.

We can study other people's experiences only indirectly, via their reports. But if we take their reports at face value, by default, we have no reason to doubt that they have experiences at all, and the so-called Other Minds problem does not arise.

[ heterophenomenology ] only says that our data set cannot include the experiences themselves. Why? Because there is no way for us to catalogue actual experiences.

Our data set cannot include other people's experiences. However, we can only make sense of their reports in terms of our own experiences. If you have never banged your own elbow, you are not going to understand your experimental subject's report about banging her elbow. We have to go on other people's reports when studying other people's experience, and we have to go on our own experience as well, or we could not make sense of it.
 
  • #52
Canute said:
In my view it is utterly oxymoronic to say that we can believe that we are having an experience we are not having. If you think that this is possible then we do not mean the same thing by the terms 'experience' or 'believe'. On zombies, I believe that it is possible to hypothesise the existence of beings who act like us but are not conscious, and that it is a useful thought experiment since it shows that we are not zombies. However I do not believe any such thing can actually exist.

But, if a zombie did exist, and if "belief" were simply a disposition to hold to a certain point more strongly than others (clearly explainable within a neurological framework (read William H. Calvin, Joseph LeDoux, Gerald Edelman, etc.)), then the zombie would be able to believe that he had conscious experiences. He would hold to that side of the argument with intensity and incorrigibility. He would be you.

In short: belief is a neural action sometimes expressed in verbal reports; zombies have "action-consciousness", pace Chalmers; therefore a zombie can believe that he has conscious experience.
 
  • #53
loseyourname,

I'm not sure exactly what we're arguing here. Are you claiming that:
1. There is no hard problem of consciousness
2. It can be solved by heterophenomenology
3. It can't be solved by any method, but heterophenomenology will get us as close as we'll ever get.
I think you probably fall into the third category, whereas Dennett is in either the first or second. I would put myself in the fourth category:
4. It is impossible to know whether a given question can be answered or not, and we should not give up just because we can't think of a way right now.
Obviously no one's going to give up on trying to solve the hard problem just because Dennett is saying we should, so this debate isn't very significant. If another method can be found, it will be, and this will all be put to rest.

So why are people giving up now? Because it certainly seems such a method is impossible, and we'd like to think we can explain everything. Are they right? Well, again, there are a few possible alternatives:


1. Mysterianism - Intrinsic subjective experience (consciousness) is real, and it is what we talk about when we discuss things like the hard problem of consciousness (as opposed to the epiphenomenalist view). But its existence defies human reason, and maybe even logic, and we will never be able to wrap it up in a neat little explanation like we can for almost everything else.

2. Physical world isn't causally closed - I'm starting to consider this possibility, even though it makes me look a little quacky. Because the other option that takes consciousness seriously, epiphenomenalism, is not just ugly, it's paradoxical. These people suggest consciousness cannot cause. Wait a second, what can't cause? It's absurd. But the physical world not being causally closed is certainly controversial.

3. Epiphenomenalism - I already explained the problems with this view, but it's out there. Maybe I'm missing something about it.

4. Other - this is basically "present day" mysterianism. That is, it is completely beyond our reach now, including even a vague idea how to tackle the problem, but one day we'll have an epiphany. We'll discover that whether the physical world is causally closed is not a yes or no question, or something strange like that.

Then there's Rosenberg. I honestly can't imagine how his view could maintain physical closure and avoid epiphenomenalism, but I'll have to finish his book and see.

So Dennett denies a natural phenomenon to make his life easier. Chalmers cannot reconcile his views with the rest of science. Neither of them are in enviable positions, but at least Chalmers is being faithful to his duty of finding the truth.

Heterophenomenology itself is fine. You continue to say we aren't arguing Dennett's views when clearly we are. Otherwise, what are we arguing? That we shouldn't accept subjects' reports as infallible? That's obvious. That we shouldn't accept our own beliefs? Less obvious, but ok, there are clearly ways we can delude ourselves. But do you really think the entire world is in a mass delusion about consciousness? When you look at the color red, can you really be comfortable saying you're deluding yourself into thinking it has intrinsicness, when it's really just bare differences? Why? Because facing the facts is too hard?
 
  • #54
statusX

To add to your list:

5. There is one world made out of one kind of stuff.
There is a way of describing the world in general, and brains in particular, in quantitative structural terms, which we call "physical".
There is a way brains represent their own activities to themselves, which we call "consciousness".
The physical way of talking is not good at capturing the flavour of consciousness.
But both descriptions overlap; they are not talking about two different things (no dualism).
Consciousness is a real feature of brains (no eliminativism).
Since the consciousness description and the physical description overlap, what is happening causally in the physical description is not happening *instead* of what is happening causally in the consciousness description -- it is just another way of describing it (no epiphenomenalism).
The two descriptions do not overlap entirely; one brings out things the other does not (no identity theory).
 
  • #55
Here's a question about consciousness and causality. I am conscious of wanting to raise my arm. My arm does rise. Did my consciousness cause that? Or did some preconscious physical occurrence cause BOTH the want and the rise? Notice the latter explanation would account for the measured gap between the start of the physical chain of events leading to the rise and the becoming aware of the desire. This experimental finding has not been refuted, although it has certainly been vigorously attacked.
 
  • #56
selfAdjoint said:
Here's a question about consciousness and causality. I am conscious of wanting to raise my arm. My arm does rise. Did my consciousness cause that? Or did some preconscious physical occurrence cause BOTH the want and the rise? Notice the latter explanation would account for the measured gap between the start of the physical chain of events leading to the rise and the becoming aware of the desire. This experimental finding has not been refuted, although it has certainly been vigorously attacked.

That's true, it is very likely physical processes in the brain caused both. However, this interpretation becomes troublesome when applied to judgements about consciousness itself. Was it the experiences that caused me to believe I had experiences, or was it brain states? Dennett looks at this question and says "obviously the brain states, and that explains everything." The problem for those who take the hard problem seriously is this: If consciousness can't cause, how do we know about it? And if the physical world is causally closed, how can consciousness cause? I don't pretend to have the answers to these questions. The reason I believe in the hard problem is nothing I can convince anyone of with words, but it's something I know subjectively, and I'm sure all of you and even Dennett himself know it as well.

The really difficult point (arguably for humanity as a whole, not just philosophers) will be if and when we come to a reductive physical explanation for why we believe in qualia. This explanation will presumably depend on physical reactions among neurons and not require the actual existence of qualia at all. Indeed, it would mean they don't exist. Will we accept this explanation, despite its deep counterintuitiveness? Or will we reject it, and possibly question the scientific method itself?
 
  • #57
Tournesol said:
statusX

To add to your list:

5. There is one world made out of one kind of stuff.
There is a way of describing the world in general, and brains in particular, in quantitative structural terms, which we call "physical".
There is a way brains represent their own activities to themselves, which we call "consciousness".
The physical way of talking is not good at capturing the flavour of consciousness.
But both descriptions overlap; they are not talking about two different things (no dualism).
Consciousness is a real feature of brains (no eliminativism).
Since the consciousness description and the physical description overlap, what is happening causally in the physical description is not happening *instead* of what is happening causally in the consciousness description -- it is just another way of describing it (no epiphenomenalism).
The two descriptions do not overlap entirely; one brings out things the other does not (no identity theory).

That doesn't explain why we believe we are conscious. If the physical brain does all the causing and consciousness is just another "aspect" of that (I'm not really sure what you mean here), then our thoughts about consciousness, including everything you say here, are caused by our physical brain, and there is no need to refer to anything else, including other "aspects". I think your view is basically epiphenomenalism with some differences in terms.
 
  • #58
This is starting to degrade and we're talking about too many things at once. I'm going to see if we can focus on a few points and move from there.

Canute said:
In my view it is utterly oxymoronic to say that we can believe that we are having an experience we are not having. If you think that this is possible then we do not mean the same thing by the terms 'experience' or 'believe'. On zombies, I believe that it is possible to hypothesise the existence of beings who act like us but are not conscious, and that it is a useful thought experiment since it shows that we are not zombies. However I do not believe any such thing can actually exist.

Then you don't believe the Zombic Hunch. If you don't think it's possible for an entity to believe it is having experiences when in fact it is not, the whole zombie argument should be utterly useless to you. Since it is this argument that underpins the objections to heterophenomenology typically given by contemporary philosophers, I'm just going to assume that you don't fall into their camp and you're coming at this from a different angle.

If we cannot know that an experience exists unless it is reported, and an unconscious experience cannot be reported, then how can we claim that we have experiences that we do not experience? It makes no sense at all.

I know it doesn't make any sense. You've studied modern science to some extent. A good deal of it doesn't make a whole lot of sense; fortunately, the threshold conditions for whether or not a given hypothesis can be true do not include it making sense to Canute. Let's go back to the demo that Dennett gave at the debate. The colors changed with each flash, but the audience didn't begin to notice the change until (I don't remember this exactly, but for the sake of argument let's just say) the 12th flash. So the question was raised as to whether or not they experienced the other 11 flashes. According to their reports, they did not, but we know the visual information was there and received by their eyes. So we have two competing hypotheses that might explain this. The Stalinistic hypothesis says that they did not experience the first 11 flashes. In fact, the visual information was altered somewhere along the route from the retinae to the visual cortex. The Orwellian hypothesis says that they did in fact experience the flashes, but the information was altered somewhere along the route from the visual cortex to the memory center in the hippocampus. To the first-person observer, these two situations would be indistinguishable and there is no way of knowing which hypothesis is correct. Fortunately, you are wrong to say that the heterophenomenologist can only study reports - reports are quite useless at this point. In fact, the heterophenomenologist has all the tools of science available to him. This includes the potential ability to track the visual signals as they move from the retinae to the visual cortex and then to the memory-center. As of right now, there exists no way to test these hypotheses and they will have to remain equally probable, but the heterophenomenologist will be able to determine which is correct once he has the proper tools.

Of course, as Dennett points out, a good feature of any scientific conjecture is the ability to predict certain effects. In fact, his multiple drafts model did predict this very effect (Rensink's change blindness) before it was ever found to occur.

If the person did not experience seeing something then they did not experience seeing something. How can this not be true? The fact that some inputs from our senses are received subliminally has no bearing on anything. The fact is that the person did not have the experience of seeing. Quite obviously it is impossible to have an experience that we are not having.

Well, jeez, you just solved one of the great mysteries of neurology. Would you care to submit your findings to the New England Journal of Medicine?

Fine. What follows from this?

It's not entirely certain what follows from this. Just so you're clear on what I'm talking about, this is regarding hemispheric separation. There are actually two hypotheses that can explain this as well. The first would say that the left hemisphere, the hemisphere responsible for verbal reports, is the only hemisphere that actually experiences anything. The reason for guessing this is that the subject speaks using the left hemisphere, so when the subject is asked what he experienced, he will answer that he experienced all of the information that was processed by the left hemisphere. The other hypothesis is that both hemispheres experience equally, but that their experiences are separate from one another; that is, neither hemisphere has access to the experiences of the other. The evidence for this is that the right hemisphere reports all of the information processed by it just as well as the left hemisphere; it just doesn't report it verbally. It seems rather arbitrary to suggest that your true 'self' is simply the part of you that can give verbal reports and that the other 'self' is simply a subliminal zombie.

As of right now, I can't even think of any way to test which of these two hypotheses is correct, so by default it seems that we must grant status to the second, simply because it seems intellectually dishonest to grant privileged status to the part of our brain responsible for verbal reports. I know this is highly counterintuitive and suggests that we may not be a 'self' at all, but rather a collection of separate experiencing parts that simply share information with one another. It's kind of like the double-slit experiment. Is the electron a particle or a wave? There is no way to know. Perhaps the question is absurd and our ideas of "particle" and "wave" are simply not equipped to describe reality. Perhaps our idea of "self" is similar.

I've never thought about that. It seems hard to imagine why any such event should be ineffable, but perhaps. It would piss off neuroscientists, but it may be possible. I can't see the relevance of this point though.

Why would it piss off neuroscientists? There are plenty of objects that are the subject of study by science but which cannot be given a qualitative description. The aforementioned electron, for example. The relevance of this point is that you seemed to be saying if an experience is ineffable, then it cannot be a brain event. I wasn't sure how the consequent followed from the antecedent, so I asked why and you replied that maybe a brain event could be ineffable. Am I to conclude that you take back your earlier conclusion?

The fact that we have experiences at the time that we have them is one of the main reasons that heterophenomenology is useless as a means of studying experiences. All it can study is post-event reports on beliefs. Is watching a football game the same as having someone report the game to you?

It just might very well be, if we accept the second hypothesis to explain the hemispheric separation. It might very well be that watching a football game is simply the equivalent of my retinae sharing their reports with my visual cortex, which shares its reports with my memory banks and with the various faculties of my brain responsible for forming different reports, which can then be shared with other brains, either through visual or auditory receptors.

Have you ever tried studying the experiences themselves, as they happen? It can be quite rewarding.

How do you know? As soon as you have the experience, it is gone, and you are left with memory. Given that this occurs in a split microsecond, unless you are capable of stopping time, I'm not sure what you mean when you refer to studying the "experiences themselves" rather than your memory. In fact, the multiple drafts model (which has made correct predictions, might I add one more time) suggests that it is meaningless to even speak of experiences as discrete, unitary events, that what we term "experiences" are in fact constantly changing and evolving, both in the memory and in other parts of the brain.

Quite so. From a scientific perspective this is true. However it is not true. It is perfectly possible to study experiences, despite the fact that it is not possible to do it scientifically. This is known by everyone. After all, if this were not possible then science would have no reason to conclude that there is any such thing as experiences.

I'm not entirely certain science (and I'm completely certain that not "everyone") has come to the conclusion that there is such a thing as experiences in the sense that you seem to be using the term. If by "experience," you mean a discrete quantum of qualitative content that can be taken out of the river of time and looked at distinctly and clearly, then no, I'm not certain such a thing does exist.

Heterophenomenology therefore cannot study experiences, only reports and beliefs.

You seem to misunderstand the concept of what it means to study something. Let's go back to the quantum physicist example, the double slit experiment. The physicist is studying the electron. His data, however, does not include the electron itself, as it is not possible to directly view an electron with any equipment that he has available to him. His data instead includes such things as diffraction patterns on radiosensitive film and such. But he is still studying the electron. By the same token, the heterophenomenologist cannot directly record "experiences," because there is no such device that is capable of doing that. He has as his raw data the reports, both verbal and otherwise, along with any neural information he can get from the subject at the time of the report-making. Using this data, he can indirectly study the experiences themselves, which is ultimately what he hopes to explain.

If we define consciousness as 'what it is like' then whatever an experience is like is what the experience is. So Dennett cannot argue that we are not an authority on what an experience is like, since what it is like is all that the experience is.

Actually, Dennett grants full authority over 'what it is like' and says as much in the debate with Chalmers (which you again seem to have misread - his exact words are that he will grant dictatorial authority over 'what it is like'). He is just not so quick to give that simplistic and quite probably incorrect definition of what consciousness is. After all, if all we can say that we experience is 'what it is like' for the verbal reporting part of our brain, what about the other parts? What about the right hemisphere? Are you so quick to arbitrarily declare that there is no such thing as 'what it is like' to be the separated right hemisphere? Why? What about the other parts of your brain that you now call "subliminal?" Are they not conscious simply because they cannot tell us 'what it is like?' If by 'what it is like,' you in fact mean 'what it is like' for every part of your brain, whether you consider it a part of your self or not, how do you then study that, when the part of you that you call "you" apparently doesn't have access to all of the information? What about when what you call "you" becomes two parts, as in the case of hemispheric separation? Was the right hemisphere previously a part of the subject's 'self' and now is not? Is it possible that maybe there never was a self? And perhaps there is no such thing as 'what it is like' to be Canute? Let me guess: none of this makes sense to you, so it must be incorrect.

You must have a strange sense of existence if your experiences are all in the past. Do you have none in the present?

What exactly is the present? I'm not able to quantize and isolate discrete moments in time to analyze them, if that's what you mean.

As for incorrigibility, it is clear that I know precisely at any moment what experience I am having.

This is another example of:

Vast amounts of experimental and clinical data suggest x.
y, however, makes more sense to Canute.
Therefore, y must be correct.

Are you still under the impression that this is a valid argument form? Does it even make a difference to you?

This is why we cannot be sure that we all see the same thing when we see 'green'. This seems obvious and uncontentious.

Perhaps with 'green,' but with 'triangle,' I think the opposite is true. It is quite uncontentious that when you say you experience seeing a triangle, you are having exactly the same experience as me. Perhaps you cannot describe the surroundings equally well, but you can very well describe the triangle. There is nothing ineffable about 'triangleness.' It is simply any three-sided figure, the interior angles of which add up to 180 degrees.

It's no good just giving up and saying we cannot do this because they are not inter-subjective. It's a refusal to face the facts.

Nobody says don't study x because it isn't intersubjective. We say don't study x because x cannot be directly apprehended. My guess is that, in most cases, our beliefs about our experiences are mostly correct. We should, however, reserve judgement because we cannot know for certain, especially in light of all the data that suggests we use many illusory concepts when we communicate and think about our consciousness. In light of the fact that you seem completely unwilling to accept any experimental or clinical data that runs counter to your intuition, is it really fair for you to suggest that heterophenomenologists are the ones refusing to face the facts?

If you want an in depth explanation I'd suggest reading 'Abhidhamma Studies' by the Venerable Nyananponika Thera. It's heavy going but if you can handle Rosenberg's book then you'll catch the gist of it. It is an explanation of the nature of consciousness and a detailed analysis of its causes and constituents, without a single mention of beliefs and reports.

Does the Venerable tackle any of the issues raised by the experimental and clinical data that has been brought to light by the heterophenomenologists or such models as the tensor network and multiple drafts?

As to qualia I feel that we already have a perfectly good definition, which is why the term is widely used.

What is the definition you use, then? When you use the term "qualia," what is it that you referring to?

To get started I'd suggest the one that Descartes used. Or stick a pin in your foot.

That isn't exactly what I meant by demonstration. What I meant was: is there any way you can think of to demonstrate to me that you are conscious? I don't question that fact, but still. Although, given that you think zombies are impossible, and could not believe they were having experiences unless they actually were, the fact that you believe and report to me that you have experiences should suffice as proof for you.

I did not say this. I said that it was not the aim of heterophenomenology to explain experiences.

This is a blatantly, factually incorrect statement. In light of the many quotations I have provided in which Dennett says that it is his aim (and the aim of heterophenomenology) to explain experiences, why do you insist that the opposite is true? I can see how you might claim that the heterophenomenologist cannot explain experiences, but to continue to say at this point that such a pursuit is not his aim is to just flat out lie.

Well, Dennett does say this. He says we must start from reports and work backwards from there to beliefs.

God, Canute, do you read these passages at all? Dennett says that we work to beliefs as the primary pretheoretical data. The ultimate knowledge sought, the post-theoretical data, are the experiences themselves. If you don't think he can get there, fine. But don't lie to us and pretend that he makes no attempt.

Perhaps also you might explain why a zombie would report experiences that by definition it has never had, and what sort of form these reports might take. Btw the zombie argument should not be used as if zombies can exist, that is to misunderstand their legitimate use in certain thought experiments and arguments.

Oh god. Is this another one of those "It's logically possible but not empirically possible" deals? Kind of like 'Last Thursdayism' is logically possible, so evolution by natural selection has some serious explaining to do. Or what about this one: It is logically possible that no human being but me has a head. I could, in fact, be deceived by all of my senses into believing that other humans have heads, but in fact, they do not. Therefore, human anatomy as we know it is an incomplete science because it cannot demonstrate that humans actually do have heads.

Let's be clear on this. If you think it is empirically impossible for an entity to be physically identical to you, and behave in exactly the same way, express all of the same beliefs, but not have experiences, then you think there is nothing to experience but physicality.
 
  • #59
StatusX said:
Was it the experiences that caused me to believe I had experiences, or was it brain states? Dennett looks at this question and says "obviously the brain states, and that explains everything."

Actually, what Dennett would say is that you haven't asked a question at all. He believes that experiences are brain states, and so you just essentially asked "Was it the brain-states that caused me to believe I had a brain-state, or was it the brain-states?" A rather absurd question if you take that as a possibility. It seems that you don't.
 
  • #60
loseyourname said:
Actually, what Dennett would say is that you haven't asked a question at all. He believes that experiences are brain states, and so you just essentially asked "Was it the brain-states that caused me to believe I had a brain-state, or was it the brain-states?" A rather absurd question if you take that as a possibility. It seems that you don't.

Experiences and brain states are certainly not a priori identical. So the question "Are experiences nothing more than brain states?" is very meaningful, it's just that Dennett feels he has already answered it.

Would you agree that a neurological explanation of experience, like one heterophenomenology could give us, would consist of nothing more than bare differences? That is, there would be slots in the theory for "the subjective experience of green" and "the subjective experience of red", but these would consist only of their causal roles and the fact that they're different from each other. If so, the question is, does that cover everything there is to know about experience? I don't think it does.

We know more than "red is different than green, and it causes me to say different things." We know what red looks like and what green looks like. The theory might mention that we believe they look a certain way, but that hardly tells us what they look like or why they look like anything at all. The only way out would be to assume the beliefs are false, but how could the theory ever demonstrate that? Are you saying the theory will never be able to answer such questions, but neither will any other, or are you saying such questions are meaningless because the beliefs are false?
 
  • #61
StatusX said:
loseyourname,

I'm not sure exactly what we're arguing here. Are you claiming that:
1. There is no hard problem of consciousness
2. It can be solved by heterophenomenology
3. It can't be solved by any method, but heterophenomenology will get us as close as we'll ever get.
I think you probably fall into the third category, whereas Dennett is in either the first or second. I would put myself in the fourth category:
4. It is impossible to know whether a given question can be answered or not, and we should not give up just because we can't think of a way right now.

As a good heterophenomenologist, I'm not taking a stand on this matter. I will remain neutral until I can be convinced that one of those is true. That I do believe is the correct stance to take. I'm not entirely certain that Dennett falls into the 1st or 2nd categories either. What exactly is meant by the 'hard problem?' Because, as I've seen it formulated, it is basically the question of why there should be experiences at all, correct? This is a question that can in principle be answered through computer science, with AI. If we ever manage to develop a non-organic entity that can experience, we can then determine the threshold conditions. If it becomes obvious that this cannot be done, then we will need to conclude that there is indeed a non-physical aspect that we just cannot recreate.

There is a slight quagmire, however. If we assume that Rosenberg's model is true, then an exact physical replica of the human brain in computer form would experience in the same way we do, but there would still be non-physicality involved. The thing is, his theory is purely rational. There is no empirical basis whatsoever and so no way of evaluating it except to say that it seems elegant and answers my questions so 'what the heck?' If such a theory is the only way to answer the question "Why is there experience at all?", then yes, I don't think the question can be answered, because let's face it, that isn't an answer. It's conjecture that seems to make sense and nothing more. Aristotle might accept it, but I will not.

So why are people giving up now? Because it certainly seems such a method is impossible, and we'd like to think we can explain everything. Are they right?

I think it's wrong to say that people are "giving up." Heterophenomenology is simply a method for scientists, and it lays out the proper stance that a scientist should take, because - as I think Dennett demonstrates - it is the only stance he can take that guarantees he won't end up with spurious data. Philosophers, on the other hand, are free to use rationalism as they always have and attempt to demonstrate through thought experiment the truth of a given hypothesis. As he says, Einstein's theory of relativity came very close to being pure philosophical speculation. Maybe someone even more brilliant than Einstein will be able to do it. Chalmers is not that man, however.

Well, again, there are a few possible alternatives:


1. Mysterianism - Intrinsic subjective experience (consciousness) is real, and it is what we talk about when we discuss things like the hard problem of consciousness (as opposed to the epiphenomenalist view). But its existence defies human reason, and maybe even logic, and we will never be able to wrap it up in a neat little explanation like we can for almost everything else.

2. Physical world isn't causally closed - I'm starting to consider this possibility, even though it makes me look a little quacky. Because the other option that takes consciousness seriously, epiphenomenalism, is not just ugly, it's paradoxical. These people suggest consciousness cannot cause. Wait a second, what can't cause? It's absurd. But the physical world not being causally closed is certainly controversial.

3. Epiphenomenalism - I already explained the problems with this view, but it's out there. Maybe I'm missing something about it.

4. Other - this is basically "present day" mysterianism. That is, it is completely beyond our reach now, including even a vague idea how to tackle the problem, but one day we'll have an epiphany. We'll discover that whether the physical world is causally closed is not a yes or no question, or something strange like that.

Then there's Rosenberg. I honestly can't imagine how his view could maintain physical closure and avoid epiphenomenalism, but I'll have to finish his book and see.

All of these are options, but why exactly is it that you think an empirical investigation using the heterophenomenological stance cannot give us any useful information to steer us in the right direction?

So Dennett denies a natural phenomenon to make his life easier. Chalmers cannot reconcile his views with the rest of science. Neither of them are in enviable positions, but at least Chalmers is being faithful to his duty of finding the truth.

What is it that you think Dennett has denied? His attempt seems to me to be an attempt at reduction, not elimination. I suppose I could be wrong (although I e-mailed him and he told me I was faithfully representing his views), but why does that matter when this thread is about heterophenomenology? The stance itself certainly doesn't deny or confirm any phenomena.

Heterophenomenology itself is fine. You continue to say we aren't arguing Dennett's views when clearly we are. Otherwise, what are we arguing? That we shouldn't accept subjects' reports as infallible? That's obvious.

Well, gee, I guess we aren't arguing then. This thread is about heterophenomenology. Just because the term was coined by Dennett doesn't mean we need to include his personal views in an examination of the stance. Dennett isn't even a scientist. He just interprets work that is being done by others. He saw that they were using a certain neutral stance and he defended it and invented a name for it.

That we shouldn't accept our own beliefs? Less obvious, but ok, there are clearly ways we can delude ourselves. But do you really think the entire world is in a mass delusion about consciousness? When you look at the color red, can you really be comfortable saying you're deluding yourself into thinking it has intrinsicness, when it's really just bare differences? Why? Because facing the facts is too hard?

I have no clue what the difference would be between observing something that is intrinsic and observing something that is built of bare differences. I certainly won't call either a fact of my experience. It's been chronicled in some depth here exactly why we should not give incorrigible status to our own beliefs about our experiences. This doesn't mean that they are all incorrect, however. The ideas of self and of quantized discrete percepts seem illusory, plus there is the practical impossibility of being able to tell first-person the difference between Orwellian and Stalinistic revision. Obviously, we can observe first-person that we do experience and that we can be certain of, but that's about it. You can't seriously try to tell me that you can look at an object of a certain color and know just from looking at it that what you are looking at is not the result of any extrinsic property of the object. What is your perception of green then? Is it an intrinsic property of leaf matter? Is it an intrinsic property of photons of a certain wavelength? (Does that one even make sense? If the difference between a photon of one color and a photon of another is the wavelength, but intrinsically they are both the same, does that make sense?) Is it an intrinsic property of something in your neurons? Why do you feel that it must be intrinsic?
 
  • #62
Tournesol said:
5. There is one world made out of one kind of stuff.
There is a way of describing the world in general, and brains in particular, in quantitative structural terms, which we call "physical".
There is a way brains represent their own activities to themselves, which we call "consciousness".
The physical way of talking is not good at capturing the flavour of consciousness.
But both descriptions overlap; they are not talking about two different things (no dualism).
Consciousness is a real feature of brains (no eliminativism).
Since the consciousness description and the physical description overlap, what is happening causally in the physical description is not happening *instead* of what is happening causally in the consciousness description -- it is just another way of describing it (no epiphenomenalism).
The two descriptions do not overlap entirely; one brings out things the other does not (no identity theory).


StatusX said:
That doesn't explain why we believe we are conscious.

We believe we are because we are; "consc. is a real feature of brains".

If the physical brain does all the causing

There is not a physical brain and a separate non-physical mind. There is a third-person physical description and a first-person subjective apprehension, which are both of the same ultimately real entity. The trick is not to take 'physical' to mean 'ultimately real'.


and consciousness is just another "aspect" of that (I'm not really sure what you mean here), then our thoughts about consciousness, including everything you say here, are caused by our physical brain, and there is no need to refer to anything else, including other "aspects". I think your view is basically epiphenomenalism with some differences in terms.

That would again be because you are taking "physical" to mean "really real". The brain does what it does causally and we can explain it in subjective or neurological terms, and both explanations will work.

"what is happening causally in the physical description is
not happening *instead* of what is happening causally in the consciousness
description -- it is just another way of describing it (no epiphenomenalism)."
 
  • #63
Tournesol said:
We believe we are because we are; "consc. is a real feature of brains".

Well that's hardly an answer. I could just rephrase the question to ask why you said that.

"what is happening causally in the physical description is
not happening *instead* of what is happening causally in the consciousness
description -- it is just another way of describing it (no epiphenomenalism)."

The problem is, we believe there is more to consciousness than just "what is happening causally," otherwise there wouldn't be a problem. How does your model explain the fact that we have been made to believe in something that cannot cause? I realized this was the argument we were getting to in this thread, which is pretty off topic from heterophenomenology, so I started a new thread in the metaphysics forum. I explain the problem in more detail there.
 
  • #64
LYM said:
As a good heterophenomenologist, I'm not taking a stand on this matter. I will remain neutral until I can be convinced that one of those is true. That I do believe is the correct stance to take. I'm not entirely certain that Dennett falls into the 1st or 2nd categories either. What exactly is meant by the 'hard problem?' Because, as I've seen it formulated, it is basically the question of why there should be experiences at all, correct?

Well, it is to do with the experiential aspect of consciousness rather than the functional, "doing" aspect.

This is a question that can in principle be answered through computer science, with AI.

Huh?? Given that we know we have experiences, why shouldn't we just figure out what we use them for?

If we ever manage to develop a non-organic entity that can experience,

How would we know it is experiencing? AFAICS you would have to solve the HP first -- i.e. understand how material processes generate experience.
 
  • #65
loseyourname said:
All of these are options, but why exactly is it that you think an empirical investigation using the heterophenomenological stance cannot give us any useful information to steer us in the right direction?

I think this is the main point we're arguing, and it doesn't necessarily concern heterophenomenology itself.

Any theory has two parts: a formalism and an interpretation. The formalism is the extrinsic, mathematical model, and the interpretation relates certain aspects of the formalism to reality. Heterophenomenology will be fine for producing a formal theory of consciousness, but we are still free to decide how to interpret it. This choice is based on such factors as internal consistency and elegance.

The question of which of these options to pick is, at least partly, an interpretational issue. For example, we can't pick an option where we believe in something that cannot affect our beliefs, because that isn't consistent. But if we can find an interpretation that is consistent, and rides well with both the formalism we get from heterophenomenology and our first person experience, we will probably have done as much as we can do. Which interpretation is correct and why is philosophy, not science.
 
  • #66
statusx said:
Well that's hardly an answer.

Why not? I believe there is a computer in front of me because there is. I suppose there are some other conditions -- my eyes are open, I can see, etc. Consciousness is self-awareness, I am aware of myself, so I am conscious. If you are into eliminativism, you are going to have to give a convoluted answer to that kind of question, but I am not, so I can stick with the kind of simple answer that is good for most things.

The problem is, we believe there is more to consciousness than just "what is happening causally," otherwise there wouldn't be a problem. How does your model explain the fact that we have been made to believe in something that cannot cause?

I don't believe that consciousness "cannot cause"; I am giving an explanation of why it can. Epiphenomenalists believe consc. cannot cause because
they think physical explanations exclude mental ones; I don't.

I notice that in both of these comments you seem to assume that the only thing to explain is the belief in consciousness and that it needs to be explained as some kind of myth. Somewhat question-begging, perhaps?
 
  • #67
Tournesol said:
I don't believe that consciousness "cannot cause"; I am giving an explanation of why it can. Epiphenomenalists believe consc. cannot cause because
they think physical explanations exclude mental ones; I don't.

You're missing the point. If the causal nature of consciousness was all there was, there would be no problem. And there wouldn't be two aspects to it, there would just be its causal role. The reason we make the first person distinction is that there is more to experiences than what they cause us to do. There is "something it is like" to have an experience. Staring at a color, it actually looks like something. It isn't just defined by what it causes you to say or think. Do you see the difference? If you really do intend to say consciousness is nothing more than its causal role, you in fact are an eliminativist.
 
  • #68
loseyourname said:
Then you don't believe the Zombic Hunch. If you don't think it's possible for an entity to believe it is having experiences when in fact it is not, the whole zombie argument should be utterly useless to you. Since it is this argument that underpins the objections to heterophenomenology typically given by contemporary philosophers, I'm just going to assume that you don't fall into their camp and you're coming at this from a different angle.
Zombies do not believe anything. Believing requires consciousness, just as knowing does. By your presentation of the argument zombies are no different to human beings. This is a misunderstanding of zombies as used by Chalmers etc. Zombies are defined as behaving like us, but they are not conscious. If they are able to believe things and to believe that they are having experiences then human beings are zombies and we don't have to argue about whether they can exist or not, we know they do.

The Orwellian hypothesis says that they did in fact experience the flashes, but the information was altered somewhere along the route from the visual cortex to the memory center in the hippocampus. To the first-person observer, these two situations would be indistinguishable and there is no way of knowing which hypothesis is correct. Fortunately, you are wrong to say that the heterophenomenologist can only study reports - reports are quite useless at this point. In fact, the heterophenomenologist has all the tools of science available to him. This includes the potential ability to track the visual signals as they move from the retinae to the visual cortex and then to the memory-center. As of right now, there exists no way to test these hypotheses and they will have to remain equally probable, but the heterophenomenologist will be able to determine which is correct once he has the proper tools.
This is all irrelevant. There's no need for an Orwellian hypothesis. Some flashes were experienced and some weren't. I suppose some people would like to know why this happens, but it has no bearing on this discussion. We are discussing how to explain why or how subjects experience things, not why they sometimes don't. Clearly if they don't experience something then it was not an experience, and we are supposed to be talking about experiences not non-experiences.

Of course, as Dennett points out, a good feature of any scientific conjecture is the ability to predict certain effects. In fact, his multiple drafts model did predict this very effect (Rensink's change blindness) before it was ever found to occur.
An important result. When we don't pay attention to something we tend not to notice it. I could have predicted that given two minutes to think about it.

Well, jeez, you just solved one of the great mysteries of neurology. Would you care to submit your findings to the New England Journal of Medicine?
I haven't solved anything. I am talking about the definition of an experience. I define it as something we experience. If you wish to define an 'experience' as something that we may or may not experience then I can't stop you. However as far as I'm concerned an experience is an experience, and until somebody shows that we can have experiences that we don't experience I'm sticking to this definition. Do you define zombies as creatures who don't have experiences, or as creatures who have experiences that they don't experience? How would one tell the difference?

Just so you're clear on what I'm talking about, this is regarding hemispheric separation. There are actually two hypotheses that can explain this as well. The first would say that the left hemisphere, the hemisphere responsible for verbal reports, is the only hemisphere that actually experiences anything. The reason for guessing this is that the subject speaks using the left hemisphere, so when the subject is asked what he experienced, he will answer that he experienced all of the information that was processed by the left hemisphere. The other hypothesis is that both hemispheres experience equally, but that their experiences are separate from one another; that is, neither hemisphere has access to the experiences of the other. The evidence for this is that the right hemisphere reports all of the information processed by it just as well as the left hemisphere; it just doesn't report it verbally.
What's the relevance of this? You haven't yet shown that experiences exist in a manner consistent with heterophenomenology, so it's a bit premature to start talking about where they occur. As yet there is no scientific evidence to help us decide where in the brain experiences occur, no scientific evidence that shows that they occur in the brain, and no scientific evidence that they occur at all. Like Dennett you take an enormous amount for granted.

It seems rather arbitrary to suggest that your true 'self' is simply the part of you that can give verbal reports and that the other 'self' is simply a subliminal zombie.
I'd say it was absurd, which is why I didn't say it.

As of right now, I can't even think of any way to test which of these two hypotheses is correct, so by default it seems that we must grant status to the second, simply because it seems intellectually dishonest to grant privileged status to the part of our brain responsible for verbal reports.
As we cannot and will never observe experiences happening in brains then according to heterophenomenology we must simply ask the subject where their experiences occur. Do you not see that heterophenomenology cannot address these questions precisely because as a method it can tell us nothing about experiences? Online somewhere is an email discussion between Dennett and John Searle, where Searle argues that Dennett's approach leaves out what it is supposed to explain. Dennett's answers are ineffectual.

I know this is highly counterintuitive and suggests that we may not be a 'self' at all, but rather a collection of separate experiencing parts that simply share information with one another.
We know that consciousness is experienced as unified. Hence the binding problem.

It's kind of like the double-slit experiment. Is the electron a particle or a wave? There is no way to know. Perhaps the question is absurd and our ideas of "particle" and "wave" are simply not equipped to describe reality. Perhaps our idea of "self" is similar.
I'd say you are right on this point.

Why would it piss off neuroscientists? There are plenty of objects that are the subject of study by science but which cannot be given a qualitative description. The aforementioned electron, for example. The relevance of this point is that you seemed to be saying if an experience is ineffable, then it cannot be a brain event. I wasn't sure how the consequent followed from the antecedent, so I asked why and you replied that maybe a brain event could be ineffable. Am I to conclude that you take back your earlier conclusion?
Science does not give qualitative descriptions, which is Rosenberg's point, so I'm not sure what you're saying here.

I didn't mean to suggest that an ineffable experience cannot be a brain-state because it is ineffable, but because it is an experience. If an experience was a brain-state then we'd call it a brain-state and not an experience. In the end all experiences are ineffable to a greater or lesser degree. Try describing what green looks like to a blind man, or better still to a zombie.

It just might very well be, if we accept the second hypothesis to explain the hemispheric separation. It might very well be that watching a football game is simply the equivalent of my retinae sharing their reports with my visual cortex, which shares its reports with my memory banks and with the various faculties of my brain responsible for forming different reports, which can then be shared with other brains, either through visual or auditory receptors.
No offence, but if you don't mind I'll drop out of this discussion before it becomes any more surreal.
 
  • #69
There is "something it is like" to have an experience. Staring at a color, it actually looks like something. It isn't just defined by what it causes you to say or think. Do you see the diffference? If you really do intend to say consciousness is nothing more than it's causal role, you in fact are an eliminativist.

I didn't say that. "Consciousness can cause" does not mean "there is nothing more to consciousness than its causal role". And "consciousness can cause" includes qualia.
 
  • #70
Tournesol said:
I didn't say that. "Consciousness can cause" does not mean "there is nothing more to consciousness than its causal role". And "consciousness can cause" includes qualia.

It seems silly to have the same argument in two different threads, so I'll respond to this point in the other thread.
 
  • #71
Canute said:
Zombies do not believe anything. Believing requires consciousness, just as knowing does. By your presentation of the argument zombies are no different to human beings. This is a misunderstanding of zombies as used by Chalmers etc. Zombies are defined as behaving like us, but they are not conscious. If they are able to believe things and to believe that they are having experiences then human beings are zombies and we don't have to argue about whether they can exist or not, we know they do.

But just because you don't like the logical conclusion of a certain line of reasoning, doesn't mean that that line of reasoning is wrong.

Belief, as I've stated before, can be (and has been; numerous times, numerous ways) explained in terms of a neural activity. Thus, "belief" would fall into the realm of "a-consciousness", and we would be able to believe that we had "p-consciousness" even if we were zombies.

This is all irrelevant. There's no need for an Orwellian hypothesis. Some flashes were experienced and some weren't. I suppose some people would like to know why this happens, but it has no bearing on this discussion. We are discussing how to explain why or how subjects experience things, not why they sometimes don't. Clearly if they don't experience something then it was not an experience, and we are supposed to be talking about experiences not non-experiences.

The fact that the same thing can occur (a flash of light, photons entering the retina, etc), without the subject claiming "conscious experience" of it, is indeed relevant.

An important result. When we don't pay attention to something we tend not to notice it. I could have predicted that given two minutes to think about it.

But is "paying attention" a vital part of conscious "experience"? If not, then why was it the deciding factor in this experiment (as to whether the flashes were "consciously experienced" or not)? If so, then it is indeed a relevant prediction.

I haven't solved anything. I am talking about the definition of an experience. I define it as something we experience.

That's self-referential nonsense, Canute (no offense). How can you expect anyone to argue with a "bligs are something we blig, of course" argument? Or to even take it seriously?

What's the relevance of this? You haven't yet shown that experiences exist in a manner consistent with heterophenomenology, so it's a bit premature to start talking about where they occur.

This is, once again, indicative of your misunderstanding heterophenomenology. Heterophenomenology does not explain "in what manner experiences exist", it is a method for scientific understanding of verbal reports about consciousness.

No offence, but if you don't mind I'll drop out of this discussion before it becomes any more surreal.

Well, no one's going to stop you, but I would very much prefer it if you didn't leave just yet. If you do, it will certainly not be the first time that you've done that (on one of my threads, no less), and I find it to be as insulting as it is unfortunate. If you're not going to fully flesh out your arguments (and the arguments of others), why even begin discussing?
 
  • #72
I just came up with a thought experiment that might help demonstrate the point. Let me know if this is helpful to anyone: (by the way, this is sort of a continuation of the Mary character used by Jackson, but I have only heard that argument second hand, so I apologize for any inconsistencies)

Mary, the famous neuroscientist who first saw red only when she was an adult, has had a child, Michael. Being so deprived her whole life, she has become bitter at the world, and has taken it out on her son - she doesn't let him see any colors, just like her. She tells him about the colors, about how the sky is blue and bananas are yellow. But she purposely tells him nothing about the colors red and green; she wants them to be a surprise.

Then one day, she decides it's time for Michael to learn about these colors, but she reveals them to him in a peculiar way. She has prepared a room with a white floor and ceiling and walls of alternating vertical red and green stripes. She brings Michael into the room and lets him look around for a few minutes, telling him the names of the two colors, but not which is which. Then she brings him back to their discolored home.

Now we subject Michael to a heterophenomenological examination. What will we find? He knows red and green are different. Is that it? If you showed him a red slide, could he tell you which it was? No. If you asked him which is the color of an apple, could he answer? No. If you asked him which color was warmer, could he say? This is harder, since it is possible we have innate responses to colors, but assume for simplicity Michael has no such inborn traits. If so, then no, since he doesn't know what colors tend to accompany heat.

So all he knows is that they're different. What about his behavior? If shown one of the colors and asked a question, is there any possible way he could give an answer that depended on which color was being shown? Is there any way his behavior in general could be different as a result of which color he was seeing (again, assume he has no innate response to colors)? Presumably not, which means the colors play the exact same functional role.

All we have found is that a) he knows red and green are different and b) they play the same causal role. In other words, heterophenomenology has told us they are completely symmetric, according to Michael.

Now Michael is finally let out into the colorful world. According to the model we have just made of "the world according to Michael", we could switch every instance of red and green and absolutely nothing would be different to him. But clearly, something is different. What has changed? The experiences are different. The experience that he calls "red" in the red-green world corresponds to "green" in the green-red world. There is clearly a natural difference to Michael between the two worlds, but heterophenomenology completely fails to account for it.
 
  • #73
StatusX,
Michael can only tell you that "red" is different from "green", in the first place, because he knows that there are two colors, and he knows that their names are "red" and "green". Unless he were capable of physically distinguishing between the two, he would have to disbelieve his mother about there being two colors. As it is, he can physically distinguish between the two colors (though not knowing which is which), and so he agrees that there are indeed "red" and "green" stripes.

Now, when he gets out into the world, and sees the colors for himself, he will still be able to distinguish between them. He won't know which is which (but then, he didn't know that to begin with...and neither did you, until someone told you), but the process of distinguishing and categorizing is merely a process of computation (no "extra ingredient" required).
 
  • #74
Mentat said:
Michael can only tell you that "red" is different from "green", in the first place, because he knows that there are two colors, and he knows that their names are "red" and "green". Unless he were capable of physically distinguishing between the two, he would have to disbelieve his mother about there being two colors. As it is, he can physically distinguish between the two colors (though not knowing which is which), and so he agrees that there are indeed "red" and "green" stripes.

Now, when he gets out into the world, and sees the colors for himself, he will still be able to distinguish between them. He won't know which is which (but then, he didn't know that to begin with...and neither did you, until someone told you), but the process of distinguishing and categorizing is merely a process of computation (no "extra ingredient" required).

That was my point. A computational model of his brain will only capture the fact that he distinguishes them. So according to this model, nothing would be different to Michael if when he walked out into the world, it was a normal one, or if it was one where stop signs are green and grass is red. The model would predict his inner subjective world would see the two identically. But clearly this isn't true, because his experiences of what he calls red and green would be switched.

If you want to preserve the completeness of heterophenomenology, you have to either claim there really is no difference to Michael between the two worlds, or offer a way heterophenomenology could account for a difference, by finding some asymmetry in the functional role of the colors.
 
  • #75
StatusX said:
That was my point. A computational model of his brain will only capture the fact that he distinguishes them. So according to this model, nothing would be different to Michael if when he walked out into the world, it was a normal one, or if it was one where stop signs are green and grass is red. The model would predict his inner subjective world would see the two identically. But clearly this isn't true, because his experiences of what he calls red and green would be switched.

I don't get this. Michael can ask around and find out which of the two shades he can distinguish is generally called red and which is called green. Asking around doesn't strain the computationally described brain at all, does it? This is how we learn colors ourselves; our brains provide us with distinguishable, recallable entities (congeries of neural activity) and we learn to give these names by social interaction.

Computer/software systems already can do what I have described here.
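For illustration, here is a minimal sketch of that idea (hypothetical Python, not any particular existing system): a program that can tell two color stimuli apart without having names for them, and that acquires the names only through external labelling.

# Hypothetical sketch: a system that can distinguish two unnamed stimuli
# and acquires names for them only through external ("social") labelling.

class ColorLearner:
    def __init__(self):
        self.names = {}  # maps internal stimulus codes to learned names

    def distinguish(self, stimulus_a, stimulus_b):
        # The system can tell the stimuli apart without knowing any names.
        return stimulus_a != stimulus_b

    def learn_name(self, stimulus, name):
        # Naming comes from social interaction, not from the stimulus itself.
        self.names[stimulus] = name

    def identify(self, stimulus):
        return self.names.get(stimulus, "no name learned yet")

learner = ColorLearner()
red_code, green_code = 650, 530  # internal codes (wavelengths in nm, say)

print(learner.distinguish(red_code, green_code))  # True: it can tell them apart
print(learner.identify(red_code))                 # "no name learned yet"
learner.learn_name(red_code, "red")               # someone supplies the name
learner.learn_name(green_code, "green")
print(learner.identify(red_code))                 # "red"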
 
  • #76
selfAdjoint said:
I don't get this. Michael can ask around and find out which of the two shades he can distinguish is generally called red and which is called green. Asking around doesn't strain the computationally described brain at all, does it? This is how we learn colors ourselves; our brains provide us with distinguishable, recallable entities (congeries of neural activity) and we learn to give these names by social interaction.

Computer/software systems already can do what I have described here.

I admit, it isn't worded very clearly. I meant if you had put him in an inverted red-green world instead, where everyone still calls grass green, etc, it would be no different to him than if you had put him in a regular world.

Maybe it's more clear if instead, you put him in a normal world for a day, erase his memory, and then put him in a red-green inverted world. Of course he won't notice a difference, but is it still a meaningful question to ask if there is one? (this subtlety is part of the reason I didn't go with this scenario first) And if so, is there a difference?
 
  • #77
The ideal heterophenomenological neuroscientist (which can exist, since this is only a thought experiment) would look for the neural response that accompanies the human experience of "green" and "red" and be able to identify them in your hypothetical Michael. Even if he didn't know what they were, the ideal neuroscientist would. That is precisely why heterophenomenology does not restrict itself to using only verbal and behavioral reports as primary data.

Let me reformulate your thought experiment. Imagine Michelle, Michael's sister. She has been raised completely without any knowledge of geometry. One day, after turning 10, Michelle is shown a sheet of paper with two figures on it, a circle and an ellipse. When the heterophenomenologist asks Michelle what she sees, she cannot answer. She doesn't know the words "circle," "ellipse," "flatter," "round," or any other word that refers to traits of conic sections, nor has she ever seen any of these things (just as Michael doesn't know "hue," "warmth," etc.). When formulated thus, do you still think this is a failing of heterophenomenology?
 
  • #78
loseyourname said:
The ideal heterophenomenological neuroscientist (which can exist, since this is only a thought experiment) would look for the neural response that accompanies the human experience of "green" and "red" and be able to identify them in your hypothetical Michael. Even if he didn't know what they were, the ideal neuroscientist would. That is precisely why heterophenomenology does not restrict itself to using only verbal and behavioral reports as primary data.

Yes, but does heterophenomenology include the specific neural responses as part of that set S of subjective data? Michael doesn't know what his neurons are doing, so how could that be considered part of his subjectivity?

Let me reformulate your thought experiment. Imagine Michelle, Michael's sister. She has been raised completely without any knowledge of geometry. One day, after turning 10, Michelle is shown a sheet of paper with two figures on it, a circle and an ellipse. When the heterophenomenologist asks Michelle what she sees, she cannot answer. She doesn't know the words "circle," "ellipse," "flatter," "round," or any other word that refers to traits of conic sections, nor has she ever seen any of these things (just as Michael doesn't know "hue," "warmth," etc.). When formulated thus, do you still think this is a failing of heterophenomenology?

Just because she doesn't know those words doesn't mean she couldn't explain the difference. She could motion with her fingers, or draw them. But there is no way to explain colors besides identifying them with objects (red is the color of a stop sign) or by words we've associated with certain colors (red is the warmest color). That is precisely the trouble that experiences present, because they can't be defined completely in terms of structure or function. While two shapes with the same structure necessarily are identical, two colors with the same structural and functional roles (red and green in this example) are not necessarily identical.
 
  • #79
StatusX said:
Yes, but does heterophenomenology include the specific neural responses as part of that set S of subjective data? Michael doesn't know what his neurons are doing, so how could that be considered part of his subjectivity?

Heterophenomenology considers all relevant data, including neural responses, behavioral responses, and verbal reports.

Just because she doesn't know those words doesn't mean she couldn't explain the difference. She could motion with her fingers, or draw them. But there is no way to explain colors besides identifying them with objects (red is the color of a stop sign) or by words we've associated with certain colors (red is the warmest color). That is precisely the trouble that experiences present, because they can't be defined completely in terms of structure or function. While two shapes with the same structure necessarily are identical, two colors with the same structural and functional roles (red and green in this example) are not necessarily identical.

If Michelle can draw another circle and another ellipse to explain the difference between a circle and an ellipse, then Michael can paint one sheet of paper green and the other red to explain the difference between green and red. Either way, they're just referring to what they've seen by relating it to something else they can see. This experiment can be twisted to include any manner of objects or qualities that a human can visually perceive.

By the way, what makes you think that the colors green and red have the same structural/functional roles any more than circles and ellipses do? Green and red result from different wavelengths of light and evoke different nervous responses. I've never understood why antiphysicalist arguments are so obsessed with color.
 
  • #80
loseyourname said:
If Michelle can draw another circle and another ellipse to explain the difference between a circle and an ellipse, then Michael can paint one sheet of paper green and the other red to explain the difference between green and red. Either way, they're just referring to what they've seen by relating it to something else they can see. This experiment can be twisted to include any manner of objects or qualities that a human can visually perceive.

By the way, what makes you think that the colors green and red have the same structural/functional roles any more than circles and ellipses do? Green and red result from different wavelengths of light and evoke different nervous responses. I've never understood why antiphysicalist arguments are so obsessed with color.

I can write down the mathematical formulae for a circle and an ellipse. I know of no such formulae for the way-red-seems and the-way-green-seems. Do you? (NB -- not talking wavelengths).
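(For the record, in standard form a circle is x^2 + y^2 = r^2 and an ellipse is x^2/a^2 + y^2/b^2 = 1 with a ≠ b; there is no comparable formula for how red or green looks.)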
 
  • #81
loseyourname said:
Heterophenomenology considers all relevant data, including neural responses, behavioral responses, and verbal reports.

This experiment is ideal, and I'm saying that the functional roles of red and green are identical. That means that the neural circuits that cause red reactions and the ones that cause green reactions could be interchanged, and Michael would still behave the same. Whether his experiences would be the same is a further question, and to even ask it would be to assume the incompleteness of heterophenomenology.

If Michelle can draw another circle and another ellipse to explain the difference between a circle and an ellipse, then Michael can paint one sheet of paper green and the other red to explain the difference between green and red. Either way, they're just referring to what they've seen by relating it to something else they can see. This experiment can be twisted to include any manner of objects or qualities that a human can visually perceive.

How could you meaningfully replace circles and ellipses for red and green in this experiment? That would mean you'd have to create a world where circles and ellipses are switched, but every structural and functional property is unchanged. That is logically impossible.

By the way, what makes you think that the colors green and red have the same structural/functional roles any more than circles and ellipses do? Green and red result from different wavelengths of light and evoke different nervous responses. I've never understood why antiphysicalist arguments are so obsessed with color.

In general they don't. But the point of this experiment was to point out that heterophenomenology says that the only reason we see red as different than green is because they have different functional roles. If their functions were identical, they would be interchangeable. But we (or at least I) know from experience that isn't true, that red and green are different because they look different. In other words, heterophenomenology is built from bare differences, and experiences are not.
 
  • #82
To Dennett's CADBLIND machine, which can input frequency information from a screen and store it internally as numbers, the red and green in the example would be two different numbers. It compares by subtracting the numbers; if it gets a nonzero difference, they are different. The machine could answer the question that they are different, and the only thing it would need to answer more questions is a vocabulary. Function has nothing to do with it. Apples of identical shapes might be red or green.
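A rough sketch of that comparison step (illustrative Python only, not Dennett's actual CADBLIND implementation; the wavelength numbers are just placeholders):

# Illustrative only: colors are stored as numbers and judged "same" or
# "different" purely by subtraction, as described above.

def same_color(sample_a, sample_b):
    # A nonzero difference means the two samples are different colors.
    return (sample_a - sample_b) == 0

red_sample, green_sample = 650, 530  # e.g. wavelengths in nanometres

print(same_color(red_sample, red_sample))    # True
print(same_color(red_sample, green_sample))  # False: they are different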
 
  • #83
StatusX said:
This experiment is ideal, and I'm saying that the functional roles of red and green are identical. That means that the neural circuits that cause red reactions and the ones that cause green reactions could be interchanged, and Michael would still behave the same.

So you start off with an impossible situation. Okay, let's see where you're going.

How could you meaningfully replace circles and ellipses for red and green in this experiment? That would mean you'd have to create a world where circles and ellipses are switched, but every structural and functional property is unchanged. That is logically impossible.

Why? Where is the contradiction in saying that Michelle's neural circuits can be switched so that every time she is shown a circle, she sees an ellipse? She would still behave the same, and you'd still have the same problem.

In general they don't. But the point of this experiment was to point out that heterophenomenology says that the only reason we see red as different than green is because they have different functional roles.

No it doesn't. Heterophenomenology says that we should remain neutral about whether our subjects' beliefs about their experiences are correct until their claims can be verified in some way. It isn't a model of consciousness or cognition; it's just a methodology.

If their functions were identical, they would be interchangeable. But we (or at least I) know from experience that isn't true, that red and green are different because they look different. In other words, heterophenomenology is built from bare differences, and experiences are not.

Please rephrase that. I have no clue whatsoever what you are trying to say here.
 
  • #84
Tournesol said:
I can write down the mathematical formulae for a circle and an ellipse. I know of no such formulae for the way-red-seems and the-way-green-seems. Do you? (NB -- not talking wavelengths).

Can you write down formulae for the way an ellipse seems or the way a circle seems? You completely missed my point anyway. It was only that if Michelle had no concept of these things (she doesn't know any geometry and can't write formulae), they would be foreign to her and she would have no way of describing the difference between a circle and an ellipse, any more than Michael could describe the difference between green and red.
 
  • #85
loseyourname said:
So you start off with an impossible situation. Okay, let's see where you're going.

Please explain which part of it is impossible. If you really think this is true, you shouldn't have tried to see where I was going, you should have stopped right here and explained why the premise is flawed.

Why? Where is the contradiction in saying that Michelle's neural circuits can be switched so that every time she is shown a circle, she sees an ellipse? She would still behave the same, and you'd still have the same problem.

How could she be made to respond to circles in the same way she responds to ellipses? They have different structures. A simple explanation of what an ellipse is could verify that her perception is wrong. With colors, there is no way to determine that what Michael is seeing is really red, or if he sees it as green but calls it red.

Of course, a heterophenomenologist would probably say there is no meaningful distinction here. But this experiment, just like any other argument to take the hard problem seriously, can only help us look to our own experiences for evidence. We know that red looks like something, and could logically imagine it looking like something else, or nothing at all, and still filling the same causal roles. All I can do is point out exactly what you're denying when you say heterophenomenology is complete. A perfectly consistent theory could probably be constructed that ignores experience and explains all behavior, but it would be wrong, or at least incomplete.

No it doesn't. Heterophenomenology says that we should remain neutral about whether our subjects' beliefs about their experiences are correct until their claims can be verified in some way. It isn't a model of consciousness or cognition; it's just a methodology.

Again, the argument is about whether heterophenomenology is capable of completely describing the mind. You keep going back to the claim that heterophenomenology is neutral, but we aren't talking about that. I'm saying there are real phenomena it will never be able to account for, whether it makes a judgement about their existence or not.

Please rephrase that. I have no clue whatsoever what you are trying to say here.

We don't just know that red is different than green, or that red is the color these things are, and green is the color those things are. We know what they look like. Rosenberg talked about this. Of course, you can deny this and still produce a perfectly consistent theory. If we ignored high speed objects, Newtonian physics would work fine. But ignoring data to preserve an ideology is not something I want to do.
 
  • #86
selfAdjoint said:
To Dennett's CADBLIND machine, which can input frequency information from a screen and store it internally as numbers, the red and green in the example would be two different numbers. It compares by subtracting the numbers; if it gets a nonzero difference, they are different. The machine could answer the question that they are different, and the only thing it would need to answer more questions is a vocabulary. Function has nothing to do with it. Apples of identical shapes might be red or green.

Whose side are you arguing here? Michael is functionally identical to one of those machines, except there is no question that he experiences. His experiences of red and green are not accounted for by merely saying he can distinguish them. If he sees red and green anything like we do, there is clearly a lot being missed.
 
  • #87
StatusX said:
Please explain which part of it is impossible. If you really think this is true, you shouldn't have tried to see where I was going, you should have stopped right here and explained why the premise is flawed.

Inverted qualia without any associated neural difference is empirically impossible. Even Chalmers admits as much, which is why I continue to find it odd that he and others still use it as an arguing point. It is logically possible, sure, but it is also logically possible that gravity can be inverted or that any number of physical laws could be other than what they are. It's like the epiphenomenal ectoplasm argument. I can imagine a physically identical universe with epiphenomenal ectoplasm in it. Physicalism cannot account for this ectoplasm, therefore physicalism is false.

How could she be made to respond to circles in the same way she responds to ellipses? They have different structures. A simple explanation of what an ellipse is could verify that her perception is wrong.

Why should inverted qualia be constrained to colors, though? For all we know, Michelle and Michael could have a cousin Michel who experiences the perception of a buffalo every time he sees a horse. He still calls it a "horse" and uses the same descriptive terms we use. His neural responses are even exactly the same as ours. Logically possible, right? Just as much as inverted colors. But it isn't going to happen, and it's in fact rather irrelevant.

With colors, there is no way to determine that what Michael is seeing is really red, or if he sees it as green but calls it red.

Same with anything. When Aunt Marsha looks at the moon, she might really be seeing Mars. What she calls craters are actually the canals. What she calls the man in the moon is actually Olympus Mons and that George Washington face. How can we determine what she is really seeing if she says the same things we do when describing the moon and her neural activity is exactly the same?

We know that red looks like something, and could logically imagine it looking like something else, or nothing at all, and still filling the same causal roles.

There are better arguments supporting the hard problem than this. Your argument is analogous to saying that we can logically imagine animism being true without any change in natural patterns and therefore any theory of natural causation is incomplete if it cannot account for animal and vegetable spirits. If you're going to make an argument from direct experience, stick with direct experience. You've never directly experienced inverted qualia, nor is there any reason whatsoever to believe that such a phenomenon could actually occur without any ability to detect it.

All I can do is point out exactly what you're denying when you say heterophenomenology is complete.

Can you please find an example of me making such a claim? I'm pretty sure that all I've claimed is that heterophenomenology is the best method we have and that it can account for any phenomenon that any other known method can.

A perfectly consistent theory could probably be constructed that ignores experience and explains all behavior, but it would be wrong, or at least incomplete.

Kind of like a theory of inverted qualia? Internally consistent and so logically possible, but can never actually be the case?

Again, the argument is about whether heterophenomenology is capable of completely describing the mind.

Since when? The only argument I've made is that if heterophenomenology can't do it, there is no other method I've ever heard of that can.

You keep going back to the claim that heterophenomenology is neutral, but we aren't talking about that. I'm saying there are real phenomena it will never be able to account for, whether it makes a judgement about their existence or not.

I know, and you are saying this based on arguments that fail. Address the failings of these arguments or come up with better ones. The more important thing for you to do is to show me a method that can account for what you think heterophenomenology cannot. If you can't do that, you may as well complain about the incompleteness of science because it cannot answer ethical questions. At least people who make those complaints try to think of methods that can answer ethical questions.

We don't just know that red is different than green, or that red is the color these things are, and green is the color those things are. We know what they look like.

I think you're going to have to develop this concept of what it means to "know" a fact such as this. I'm not convinced yet that such knowledge is even factual knowledge. You certainly aren't using the word "know" in a way that it is typically used. It seems more accurate to say that we are "acquainted with" the quality of color perception, or something to that effect. Why don't we forget this inverted qualia stuff and move down this road? This one might actually get us somewhere.

Rosenberg talked about this. Of course, you can deny this and still produce a perfectly consistent theory. If we ignored high speed objects, Newtonian physics would work fine. But ignoring data to preserve an ideology is not something I want to do.

Well, I'm glad to hear that you value your integrity so much. So do I.
 
  • #88
I have to admit I'm a little confused as to where you stand, loseyourname. You don't really seem to be claiming qualia aren't real, or that they're within the scope of heterophenomenology. You're just saying that heterophenomenology will do as well as any other method in explaining them. If that's the case, I don't see why you have a problem with this thought experiment. I'm just trying to point out what it is that the method is missing. I'm not suggesting another way to answer these questions. I don't know if one exists that will answer them, but that doesn't mean we can't ask them.

There are still a few points I'd like to clear up though:

loseyourname said:
Your argument is analogous to saying that we can logically imagine animism being true without any change in natural patterns and therefore any theory of natural causation is incomplete if it cannot account for animal and vegetable spirits. If you're going to make an argument from direct experience, stick with direct experience. You've never directly experienced inverted qualia, nor is there any reason whatsoever to believe that such a phenomenon could actually occur without any ability to detect it.

There is no primary data for animism, but there is for qualia. And I don't quite understand how anything can be proven about qualia, such as inverted qualia being impossible. But even if it somehow could be shown empirically, it is still a priori possible that red and green be switched while preserving functional roles. Don't you see the importance of this? You can't switch the moon for Mars, or circles for ellipses, and preserve every last functional and structural role. It isn't even logically possible. You can have very confused people, but there is always a way to clear up the confusion by spelling out the terms precisely. If Michelle points to a circle and says that one side is longer than the other, she is wrong and can be corrected. If Marsha points to the moon and describes the face she sees in enough detail (maybe excruciating detail, down to the grains of sand, depending on the extent of her delusion), we'll find out she's talking about something else.

The reason is that we can verbalize structural and functional differences. But how could you ever verify that you see red the same way I do? There is no way, which is why the logical possibility that we could see it differently arises in the first place. The logical possibility is all that's important here. I'm not saying they could actually be different in this world, I'm saying that nothing in the definition would make this logically impossible.

Now, getting back to Michael, I have asserted he responds to green and red in exactly the same way. The neurons attached directly to his eye obviously react differently, but the way they are connected to the rest of his brain is completely symmetrical, so that they could be interchanged without an effect on his behavior. Don't think this is possible for a human? Fine, use a machine like the one selfAdjoint mentioned, as long as you're willing to admit that machine can experience. (unless you think an inherent asymmetry in color processing is a necessary condition for experience) Now, I think that Michael (or the machine) has specific, distinct experiences of red and green. If so, then switching every instance of red and green in the world will not affect his behavior, but will affect his experience.

Let me go over this once more, questioning any assumptions I've made. We have a boy (or machine) that responds to two colors, but does so symmetrically. That is, if you switched every instance of the two colors, he (it) would behave identically. If this is agreed to be possible (for a very simplistic human, or if not, a machine), then we need to ask does he have an experience associated with the two colors? If so, switching the colors changes the experience. So the only question left is, has heterophenomenology allowed for this change? I don't think any reasonable interpretation of the method could. If every single aspect of his behavior is identical, everything the theory could possibly talk about is identical.

The only possible loophole is in that nerve right behind his eye. But the functions of the nerves are identical, so a switch between worlds only changes which of them is being activated in a given situation. For simplicity, we might assume the two circuits are mirror images of each other, flipped about Michael's center. Then the question is, could reflecting Michael about his center (I'm starting to feel bad for this kid) have a meaningful impact on his subjective world, as heterophenomenology describes it?

Any reasonable physicist would have to say there could be no change. And yet, if Michael does have distinct experiences accompanying these colors, there must be one. And if Michael can tell them apart, he must have distinct experiences. This leads into the second issue:

I think you're going to have to develop this concept of what it means to "know" a fact such as this. I'm not convinced yet that such knowledge is even factual knowledge. You certainly aren't using the word "know" in a way that it is typically used. It seems more accurate to say that we are "acquainted with" the quality of color perception, or something to that effect. Why don't we forget this inverted qualia stuff and move down this road? This one might actually get us somewhere.

This is a pretty deep issue. It seems to me that what we know is exactly what we experience. We know facts because we experience the thoughts about those facts. (note that I'm talking about what the experiencer knows at a given instant of time) Since what we experience can be correlated to what our neurons are doing, you might think we are restricted to the knowledge "in" our neurons, ie, that which could be extracted in a detailed scan of our brain. But we also "know" what the experiences are like. We know what it feels like to know a fact or see a color.

This may not be the traditional definition of knowledge, but I think on a little reflection you'll see it's accurate. You can't justifiably claim that what we know is what is in our neurons. How do we have access to those neurons? This might seem like a ridiculous question to an eliminativist, who would simply say we are those neurons. But there really is a separation. If we were just our neurons, why don't we know what all our neurons are doing, or for that matter, what our stomach cells are doing? How do we even know for certain that we are made of neurons and not Chinese people? This might be getting pretty far from the original topic, but these are all important issues to the overarching mind problem.
 
  • #89
LYN said:
Can you write down formulae for the way an ellipse seems or the way a circle seems?

Yes -- in that it is the same thing. (Well, a circle looked at from an angle appears elliptical, but that can all be dealt with mathematically too).

You completely missed my point anyway. It was only that if Michelle had no concept of these things (she doesn't know any geometry and can't write formulae), they would be foreign to her and she would have no way of describing the difference between a circle and an ellipse, any more than Michael could describe the difference between green and red.

And *my* point (as per the original Mary gedanken) is that a complete knowledge of maths, science, etc. leaves you with specific gaps in understanding experience. You would be able to predict what a dodecahedron looks like, without having seen one, but not what colours look like or tastes taste like.

Inverted qualia without any associated neural difference is empirically impossible.

I agree. It's a useless way of arguing for qualia anyway, since it implies they are epiphenomenal.


I think you're going to have to develop this concept of what it means to "know" a fact such as this. I'm not convinced yet that such knowledge is even factual knowledge. You certainly aren't using the word "know" in a way that it is typically used.

Au contraire, people are always saying "you don't know what it is like"
(about childbirth, for instance) -- meaning that you have no first-hand experience.
 
  • #90
First off, I was trying to say that you can't write an equation to describe the experience of seeing an ellipse or a circle. That's the argument with colors, right? Because certainly you can write an equation that describes the physical underpinning that causes that experience. I fail to see a difference between the two categories of experience.

Tournesol said:
Au contraire, people are always saying "you don't know what it is like" (about childbirth, for instance) -- meaning that you have no first-hand experience.

Yeah, I know. We were just talking about this in a class I'm taking on why humans have such a propensity for war. What I was saying is that I don't think what you are talking about is factual knowledge. It is more of a personal acquaintance. I'm not going to go so far as to say it's a misusage of the verb "to know," simply because it is so commonly used, but clearly there is a distinction.

I'm reminded of an argument against the omniscience of God. It is said that God does not know what it is like to ride a bike. He has no legs or body, and even if you are Christian and believe that God took human form 2000 years ago, there were no bikes. Since God doesn't know what it's like to ride a bike, God doesn't know everything and so cannot be omniscient. The theist responds by saying that this is nonsense; having the experience of riding a bike is not in the category of factual knowledge.

Anyway, let's get back to what we were talking about in my class earlier today. We read Susan Sontag's Regarding the Pain of Others over the weekend, a short book about photographic depictions of wartime atrocity and how these affect the viewer. Certain people in the class were arguing, drawing from Sontag, that unless you've been directly exposed to war, you'll never know war, you'll never understand war. I got to thinking, however, about all of the research I've done in the past about the US Civil War. I know the name of every commanding officer and the death tolls from every major battle. I've read the letters written by Abe Lincoln, describing his decision-making process. In fact, forget about me and consider men who are actually scholars of the Civil War, who have spent their entire lives looking at every single motivating factor that went into the creation of this conflict. These men know the Civil War far better than the rural farmer in western Georgia who had her farm blazed over during Sherman's March and saw her husband, both sons, and a brother die. They understand the war better, even though they never experienced it.

It seems compelling to conclude, in light of these and other cases, that there are really two different things meant by the verb "to know."
 
  • #91
StatusX said:
I have to admit I'm a little confused as to where you stand, loseyourname. You don't really seem to be claiming qualia aren't real, or that they're within the scope of heterophenomenology. You're just saying that heterophenomenology will do as well as any other method in explaining them.

You'll have to be clearer on what you mean by "qualia" as it can be a rather loaded term. I don't think I really have a stance on this in a way that would please you. I think that the arguments I've seen from you claiming to demonstrate that a heterophenomenological method cannot, in principle, account for these qualia fail. I'm not going to make the opposite claim, though. What I will claim is that, from what I've seen, I have every reason to believe that if a heterophenomenological method won't answer your questions, neither will any other method. It might just be one of those "inert facts" that Dennett talks about, like the gold in his teeth. No amount of historical or chemical research can ever tell us whether or not this gold was ever owned by Julius Caesar, but nobody considers this a failing of history or chemistry. Sometimes there are simply factual questions, meaningful factual questions that do have answers, that nonetheless cannot be answered.

There is no primary data for animism, but there is for qualia. And I don't quite understand how anything can be proven about qualia, such as inverted qualia being impossible.

The point is that there is absolutely no evidence, no primary datum, to suggest that inverted qualia are possible, despite the fact that a theory inclusive of such a phenomenon would lead to no contradictions. Sure, it is logically possible to have inverted qualia, but for the purpose for which you seem to want to use that fact, it makes no difference.

I'm just taking your argument form and substituting in different instantiations of the statement variables to make it clear that it is not a valid argument form. I suppose I could construct a truth table, but I don't know how easy that would be using the LaTex tags we have available. Keep in mind that I'm not saying that I think all of your arguments are invalid in this way. I just don't understand why it is that people seem to think it's okay to claim one hypothesis must be incorrect simply because it is logically possible for that to be the case, or it is logically possible for a competing hypothesis to be true. The simple fact is, no theory of consciousness needs to account for things like zombies and inverted qualia any more than evolutionary biology needs to account for the sequential divine creation of individual species.

The reason is that we can verbalize structural and functional differences. But how could you ever verify that you see red the same way I do?

I still don't see why you are singling out colors. Think about Michelle once again, if you will, but this time granting her geometric knowledge. Every time she looks at a circle, she sees an ellipse. But because of the way she has always experienced this sight, she refers to the shape as being "round" and as having the equation of a circle. She says that it is equal in width and height, even though it is not. She just misunderstands these terms! How would you be able to tell the difference here? It's the same as the example with colors. Every time Michael looks at the color red, he sees green. He calls it "red," however, and describes it as a warm color with a long wavelength of light. I honestly can't tell the difference between these two situations! What the heck is the fascination with color?

This is a pretty deep issue. It seems to me that what we know is exactly what we experience. We know facts because we experience the thoughts about those facts. (note that I'm talking about what the experiencer knows at a given instant of time) Since what we experience can be correlated to what our neurons are doing, you might think we are restricted to the knowledge "in" our neurons, ie, that which could be extracted in a detailed scan of our brain. But we also "know" what the experiences are like. We know what it feels like to know a fact or see a color.

This may not be the traditional definition of knowledge, but I think on a little reflection you'll see it's accurate. You can't justifiably claim that what we know is what is in our neurons. How do we have access to those neurons? This might seem like a ridiculous question to an eliminativist, who would simply say we are those neurons. But there really is a separation. If we were just our neurons, why don't we know what all our neurons are doing, or for that matter, what our stomach cells are doing? How do we even know for certain that we are made of neurons and not Chinese people? This might be getting pretty far from the original topic, but these are all important issues to the overarching mind problem.

First off, I don't think the question of whether or not our minds are built up of Chinese people is one that needs to be taken seriously by philosophers of mind. The rest of this I began to address with my response to Tournesol. I'll see what the two of you have to say before elaborating. I think that this dual usage of the verb "to know" is involved in Paul Churchland's refutation of the Mary argument. Perhaps I will look into that.
 
  • #92
First of all, I'm not arguing for the actual, metaphysical possibility of inverted qualia in normal people. I'm saying that it is possible that two colors could fill the exact same functional role for some very simplistic human, or machine. If inverted qualia are impossible, it is because of some inherent structural difference between red and green, like maybe that there are more shades of green than red, or red is closer to orange than green. But if they really could fill identical functional and structural roles (which might require some change to the thought experiment to further restrict what Michael is allowed to see and do), there is clearly no reason the experiences couldn't be switched. (I'm surprised you're still not seeing why this can't be true with shapes. "Longer" and "distance" are non-subjective, unambiguous terms. How could she misunderstand width and height in some amazingly contrived way that would allow her to see circles as ellipses, but still function normally in the rest of her life?) It is possible Michael doesn't see red and green the same way we do, but if he sees them as anything, then heterophenomenology is predicting a symmetry that isn't really there.

Which is connected to what I was saying about knowledge. If Michael knows red and green are different, he must have distinct accompanying experiences of them, because our experiences are all we can know. Undoubtedly, neurons can explain why he reports they are different. But we're talking about the inner experiencing being. And whether this kind of knowledge is the standard one or not, it is a natural phenomenon and thus something that needs to be explained.

One more thing. The analogy to inert historical facts is one of the most absurd things I've ever heard. Clearly the problem there is either that it would require an unreasonable amount of calculation, or that we're limited by uncertainty. But there is no hard problem of inert historical facts, and scientific methods aren't inherently incapable of answering those questions. And even more importantly, those are trivial, specific facts, whereas experience is arguably one of the most fundamental phenomena of the universe.

EDIT: Can you explain how Michelle would see the shape in the attached picture?
 

Attachment: ell.GIF
  • #93
loseyourname said:
First off, I was trying to say that you can't write an equation to describe the experience of seeing an ellipse or a circle.


That isn't quite the question. The question is about what things look
like. A circle will look like a circle or an ellipse -- either way what-it-looks-like is describable mathematically.

That's the argument with colors, right? Because certainly you can write an equation that describes the physical underpinning that causes that experience. I fail to see a difference between the two categories of experience.

Meaning neither is describable, or both are describable?

Yeah, I know. We were just talking about this in a class I'm taking on why humans have such a propensity for war. What I was saying is that I don't think what you are talking about is factual knowledge. It is more of a personal acquaintance. I'm not going to go so far as to say it's a misusage of the verb "to know," simply because it is so commonly used, but clearly there is a distinction.

It isn't supposed to be objective knowledge -- that is kind of the point. What are you implying? That subjective knowledge isn't proper knowledge?

I'm reminded of an argument against the omniscience of God. It is said that God does not know what it is like to ride a bike. He has no legs or body, and even if you are Christian and believe that God took human form 2000 years ago, there were no bikes. Since God doesn't know what it's like to ride a bike, God doesn't know everything and so cannot be omniscient. The theist responds by saying that this is nonsense; having the experience of riding a bike is not in the category of factual knowledge.

I think the theist could reply that God could know this miraculously.

It seems compelling to conclude, in light of these and other cases, that there are really two different things meant by the verb "to know."

Well yyyeeeesss. Qualiaphiles aren't saying subjective knowledge overrides
objective knowledge -- they are just saying that the two are different.
 
  • #94
loseyourname said:
You'll have to be clearer on what you mean by "qualia" as it can be a rather loaded term.

C.I Lewis's original definition of qualia:-

"There *are* recognizable qualitative characters of the
given, which may be repeated in different experiences,
and are thus a sort of universals; I call these "qualia."
But although such qualia are universals, in the sense of being
recognized from one to another experience, they must
be distinguished from the properties of objects. Confusion
of these two is characteristic of many historical
conceptions, as well as of current essence-theories.
The quale is directly intuited, given, and is not the
subject of any possible error because it is purely subjective."

The point is that there is absolutely no evidence, no primary datum, to suggest that inverted qualia are possible,

Surely you mean undetectable inverted qualia. Things like colour blindness are not too far from real inverted qualia.

I still don't see why you are singling out colors. Think about Michelle once again, if you will, but this time granting her geometric knowledge. Every time she looks at a circle, she sees an ellipse. But because of the way she has always experienced this sight, she refers to the shape as being "round" and as having the equation of a circle. She says that it is equal in width and height, even though it is not. She just misunderstands these terms! How would you be able to tell the difference here?

I think it would be very easy to spot someone who systematically misunderstands the word "equal". This isn't even a logical possibility.


First off, I don't think the question of whether or not our minds are built up of Chinese people is one that needs to be taken seriously by philosophers of mind.

Errmm..it's a reductio ad absurdum...it's supposed to be silly.
 
  • #95
Tournesol said:
Things like colour blindness are not too far from real inverted qualia.

Color blindness is completely explained by physical factors. How can it be "qualia" which by the definition you gave ("purely subjective") are not?
 
  • #96
StatusX said:
First of all, I'm not arguing for the actual, metaphysical possibility of inverted qualia in normal people. I'm saying that it is possible that two colors could fill the exact same functional role for some very simplistic human, or machine. If inverted qualia are impossible, it is because of some inherent structural difference between red and green, like maybe that there are more shades of green than red, or red is closer to orange than green.

The structural difference is pretty clear. They are caused by different wavelengths of light. Higher frequency photons have more energy and react with our retinae in a different way than lower frequency photons. Because they are different reactions, our brain perceives them differently. Any difference in perception from person to person would have to be a difference in the way their brain processes information - a difference that should be detectable empirically.
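(For reference: photon energy is E = hf = hc/λ, so green light at roughly 530 nm carries more energy per photon than red light at roughly 650 nm.)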

"Longer" and "distance" are non-subjective, unambiguous terms. How could she misunderstand width and height in some amazingly contrived way that would allow her to see circles as ellipses, but still function normally in the rest of her life?)

They are no less subjective or ambiguous than the terms we use to describe color. Hue and richness and warmth are every bit as measurable as length and distance. She wouldn't understand these terms in any contrived way. Her misperception would need to be fairly systematic, but I see no contradictions arising, which is all that "logically possible" means. This situation, no matter how impossible, could occur without a paradox arising.

It is possible Michael doesn't see red and green the same way we do, but if he sees them as anything, then heterophenomenology is predicting a symmetry that isn't really there.

No clue what you mean here. I wasn't aware that heterophenomenology predicted anything other than that some of our beliefs about our experiences are incorrect.

Which is connected to what I was saying about knowledge. If Michael knows red and green are different, he must have distinct accompanying experiences of them, because our experiences are all we can know. Undoubtedly, neurons can explain why he reports they are different. But we're talking about the inner experiencing being. And whether this kind of knowledge is the standard one or not, it is a natural phenomenon and thus something that needs to be explained.

I agree. Why humans see colors in the way they see them is indeed something that any science of consciousness should have to explain.

One more thing. The analogy to inert historical facts is one of the most absurd things I've ever heard.

Why? One is something that clearly does not need to be explained, simply because it cannot be. The other is yet another thing that may not be explainable. If that is the case, they are analogous.

Clearly the problem there is either that it would require an unreasonable amount of calculation, or that we're limited by uncertainty. But there is no hard problem of inert historical facts, and scientific methods aren't inherently incapable of answering those questions. And even more importantly, those are trivial, specific facts, whereas experience is arguably one of the most fundamental phenomena of the universe.

Two situations needn't be congruent to be analogous. Don't try to put words in my mouth. All I said was that they had the one similarity I've noted above.

EDIT: Can you explain how Michelle would see the shape in the attached picture?

I suppose she would see one circle. That's a good point. Perhaps I should change her misperception to mistaking triangles for quadrangles. How would Michael see sub-green wavelengths of light? As infrared?
 
  • #97
loseyourname said:
The structural difference is pretty clear. They are caused by different wavelengths of light. Higher frequency photons have more energy and react with our retinae in a different way than lower frequency photons. Because they are different reactions, our brain perceives them differently. Any difference in perception from person to person would have to be a difference in the way their brain processes information - a difference that should be detectable empirically.

But we're not talking about photons, we're talking about his experience. We can have visual experiences in the absence of photons, so they are not necessary prerequisites. And I'm saying Michael processes them identically. All he can do is distinguish them.

Imagine a simple circuit that can detect radio waves of a certain frequency. Two of these are built, one that outputs a pulse if a low frequency (lf) wave hits the antenna, and the other outputs a pulse if a high frequency (hf) wave hits it. Attached to these two circuits is a simple digital processor. Now, two waves are sent in succession. As they are received, the appropriate detector circuit passes on a signal. The digital processor's job is to determine if both signals came from the same detector, that is, if the two waves were the same or not. That's all it can do. This system is symmetric in hf and lf waves, in that if you switched all the instances of hf waves for lf waves and lf for hf, it would respond the same.
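Here is a minimal sketch of that setup (hypothetical Python, just to make the symmetry explicit; the threshold and the example frequencies are arbitrary):

# Two detectors and a comparator, as described above.  "lf" and "hf" play
# the role of the two colors.

def detect(wave_hz):
    # Which detector fires: "lf" for low-frequency waves, "hf" for high.
    return "lf" if wave_hz < 100.0 else "hf"

def same_kind(wave1, wave2):
    # The processor's entire job: did both pulses come from the same detector?
    return detect(wave1) == detect(wave2)

def swap(wave_hz):
    # Exchange every lf wave for an hf wave and vice versa.
    return 150.0 if wave_hz < 100.0 else 50.0

# The output is unchanged under the swap: same_kind gives identical answers
# whether or not the incoming waves are exchanged.
for w1, w2 in [(50.0, 60.0), (50.0, 150.0), (150.0, 160.0)]:
    assert same_kind(w1, w2) == same_kind(swap(w1), swap(w2))
print("behavior is identical under the hf/lf swap")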

Clearly, the output of the system, and the digital circuit which creates that output, are symmetric. But the detectors themselves are not. When you switch the instances of the colors, you are switching which detector is activated when. Now assume this system has a subjective world (since this is just a simpler model of Michael's brain). I've been saying a heterophenomenologist shouldn't care about this difference, because it has no bearing on the system's behavior.

Now, maybe I'm wrong, and they would claim the subjective world is different because of this change. But here's my main point: since the difference has no functional effect, the heterophenomenologist would have to be admitting that there are aspects of the subjective world that aren't accounted for by the subject's behavior. They'd be saying that changing the way a subject is internally constructed could affect this abstract notion of subjectivity, even if every interaction the subject has with its environment is unaffected. From what I've read, I don't think that they would call such a change significant.

In fact, a method very similar to heterophenomenology (differing mainly in interpretation) could use this thought experiment to claim that the machine's experience of hf and lf waves arises in the detector circuits, not the digital circuit, because switching hf for lf would obviously switch experiences, but only the detector circuits would be different. Bringing this back to human beings, it means that our experience of colors arises somewhere between the retinas and the place where judgements about colors are made.

They are no less subjective or ambiguous than the terms we use to describe color. Hue and richness and warmth are every bit as measurable as length and distance. She wouldn't understand these terms in any contrived way. Her misperception would need to be fairly systematic, but I see no contradictions arising, which is all that "logically possible" means. This situation, no matter how impossible, could occur without a paradox arising.

Perhaps I should change her misperception to mistaking triangles for quadrangles. How would Michael see sub-green wavelengths of light? As infrared?

Again, I'm talking about experience. Maybe Michael would experience subgreen wavelengths the same way some birds experience infrared wavelengths. How would we ever know? And clearly the triangle/quadrangle confusion would not be possible, as is the case with any shapes. It would mean she'd have to confuse the numbers three and four, which you could straightforwardly derive a paradox from. (eg, ask her to take away three apples from a pile of four).

If my thought experiment fails, it is because I'm incorrectly assuming heterophenomenology would ignore the non-behavioral change caused by the switch, as I described above. If this is the case, then I don't really have a problem with the method, although I would take it as a starting point (ie, axiom) that qualia really do exist. But it does not fail because you could replace arbitrary objects for "red experience" and "green experience" and reach an absurdity. The only things you could switch in are things that have no functional or structural properties, ie, intrinsic experiences.
 
  • #98
selfAdjoint said:
Color blindness is completely explained by physical factors. How can it be "qualia" which by the definition you gave ("purely subjective") are not?

Colour blindness is only "completely explained" in terms of anatomical differences and functional competencies. If you are awkward enough to want to know how things look to someone with red-green colour blindness, the explanation is not going to tell you.
 
  • #99
Tournesol said:
Colour blindness is only "completely explained" in terms of anatomical differences and functional competencies. If you are awkward enough to want to know how things look to someone with red-green colour blindness, the explanation is not going to tell you.

How do things "look" to someone who isn't color blind?
 
  • #100
Tournesol said:
Colour blindness is only "completely explained" in terms of anatomical differences and functional competencies. If you are awkward enough to want to know how things look to someone with red-green colour blindness, the explanation is not going to tell you.

This is just petitio principii, aka begging the question. You assume the explanation is incomplete because of qualia, but qualia and their existence were the point of the exercise. I don't accept qualia, so I believe the explanation is complete.
 